Special Resource Operator
Learn about the Special Resource Operator (SRO) and how you can use it to build and manage driver containers for loading kernel modules and device drivers on nodes in an OKD cluster.
The Special Resource Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
About the Special Resource Operator
The Special Resource Operator (SRO) helps you manage the deployment of kernel modules and drivers on an existing OKD cluster. The SRO can be used for a case as simple as building and loading a single kernel module, or as complex as deploying the driver, device plug-in, and monitoring stack for a hardware accelerator.
For loading kernel modules, the SRO is designed around the use of driver containers. Driver containers are increasingly being used in cloud-native environments, especially when run on pure container operating systems, to deliver hardware drivers to the host. Driver containers extend the kernel stack beyond the out-of-the-box software and hardware features of a specific kernel. Driver containers work on various container-capable Linux distributions. With driver containers, the host operating system stays clean and there is no clash between different library versions or binaries on the host.
The functions described require a connected environment with a constant connection to the network. These functions are not available for disconnected environments.
Installing the Special Resource Operator
As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift CLI or the web console.
Installing the Special Resource Operator by using the CLI
As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift CLI.
Prerequisites
You have a running OKD cluster.
You installed the OpenShift CLI (oc).
You are logged into the OpenShift CLI as a user with cluster-admin privileges.
Procedure
Install the SRO in the openshift-operators namespace:
Create the following Subscription CR and save the YAML in the sro-sub.yaml file:
Example Subscription CR
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-special-resource-operator
namespace: openshift-operators
spec:
channel: "stable"
installPlanApproval: Automatic
name: openshift-special-resource-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
Create the subscription object by running the following command:
$ oc create -f sro-sub.yaml
Switch to the openshift-operators project:
$ oc project openshift-operators
Verification
To verify that the Operator deployment is successful, run:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
nfd-controller-manager-7f4c5f5778-4lvvk 2/2 Running 0 89s
special-resource-controller-manager-6dbf7d4f6f-9kl8h 2/2 Running 0 81s
A successful deployment shows a Running status.
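If you want an additional check beyond the pod status, you can also list the ClusterServiceVersions (CSVs) in the namespace. The exact CSV name and version depend on the release that is installed in your cluster:
$ oc get csv -n openshift-operators
A CSV that reports the Succeeded phase indicates that the Operator installed correctly.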
Installing the Special Resource Operator by using the web console
As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OKD web console.
Procedure
Log in to the OKD web console.
Install the Special Resource Operator:
In the OKD web console, click Operators → OperatorHub.
Choose Special Resource Operator from the list of available Operators, and then click Install.
On the Install Operator page, select A specific namespace on the cluster, select the openshift-operators namespace that is used in the previous section, and then click Install.
Verification
To verify that the Special Resource Operator installed successfully:
Navigate to the Operators → Installed Operators page.
Ensure that Special Resource Operator is listed in the openshift-operators project with a Status of InstallSucceeded.
During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the Operator does not appear as installed, troubleshoot further:
Navigate to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
Navigate to the Workloads → Pods page and check the logs for pods in the openshift-operators project.
Using the Special Resource Operator
The Special Resource Operator (SRO) is used to manage the build and deployment of a driver container. The objects required to build and deploy the container can be defined in a Helm chart.
The example in this section uses the simple-kmod SpecialResource object to point to a ConfigMap object that is created to store the Helm charts.
Building and running the simple-kmod SpecialResource by using a config map
In this example, the simple-kmod kernel module shows how the Special Resource Operator (SRO) manages a driver container. The container is defined in the Helm chart templates that are stored in a config map.
Prerequisites
You have a running OKD cluster.
You set the Image Registry Operator state to Managed for your cluster (see the example after this list).
You installed the OpenShift CLI (oc).
You are logged into the OpenShift CLI as a user with cluster-admin privileges.
You installed the Node Feature Discovery (NFD) Operator.
You installed the SRO.
You installed the Helm CLI (helm).
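If the integrated image registry is currently set to Removed, one possible way to change the Image Registry Operator state to Managed is to patch its configuration. You might also need to configure registry storage first, as described in the additional resources at the end of this section:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'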
Procedure
To create a simple-kmod SpecialResource object, define an image stream and build config to build the image, and a service account, role, role binding, and daemon set to run the container. The service account, role, and role binding are required to run the daemon set with the privileged security context so that the kernel module can be loaded.
Create a templates directory, and change into it:
$ mkdir -p chart/simple-kmod-0.0.1/templates
$ cd chart/simple-kmod-0.0.1/templates
Save this YAML template for the image stream and build config in the templates directory as 0000-buildconfig.yaml:
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
labels:
app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} (1)
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} (1)
spec: {}
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
labels:
app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} (1)
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} (1)
annotations:
specialresource.openshift.io/wait: "true"
specialresource.openshift.io/driver-container-vendor: simple-kmod
specialresource.openshift.io/kernel-affine: "true"
spec:
nodeSelector:
node-role.kubernetes.io/worker: ""
runPolicy: "Serial"
triggers:
- type: "ConfigChange"
- type: "ImageChange"
source:
git:
ref: {{.Values.specialresource.spec.driverContainer.source.git.ref}}
uri: {{.Values.specialresource.spec.driverContainer.source.git.uri}}
type: Git
strategy:
dockerStrategy:
dockerfilePath: Dockerfile.SRO
buildArgs:
- name: "IMAGE"
value: {{ .Values.driverToolkitImage }}
{{- range $arg := .Values.buildArgs }}
- name: {{ $arg.name }}
value: {{ $arg.value }}
{{- end }}
- name: KVER
value: {{ .Values.kernelFullVersion }}
output:
to:
kind: ImageStreamTag
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} (1)
1 The templates such as {{.Values.specialresource.metadata.name}} are filled in by the SRO, based on fields in the SpecialResource CR and variables known to the Operator such as {{.Values.KernelFullVersion}}.
Save the following YAML template for the RBAC resources and daemon set in the templates directory as 1000-driver-container.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
rules:
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
verbs:
- use
resourceNames:
- privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
subjects:
- kind: ServiceAccount
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
namespace: {{.Values.specialresource.spec.namespace}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
annotations:
specialresource.openshift.io/wait: "true"
specialresource.openshift.io/state: "driver-container"
specialresource.openshift.io/driver-container-vendor: simple-kmod
specialresource.openshift.io/kernel-affine: "true"
specialresource.openshift.io/from-configmap: "true"
spec:
updateStrategy:
type: OnDelete
selector:
matchLabels:
app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
template:
metadata:
# Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
# reserves resources for critical add-on pods so that they can be rescheduled after
# a failure. This annotation works in tandem with the toleration below.
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
spec:
serviceAccount: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
serviceAccountName: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
containers:
- image: image-registry.openshift-image-registry.svc:5000/{{.Values.specialresource.spec.namespace}}/{{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}}
name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
imagePullPolicy: Always
command: ["/sbin/init"]
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "systemctl stop kmods-via-containers@{{.Values.specialresource.metadata.name}}"]
securityContext:
privileged: true
nodeSelector:
node-role.kubernetes.io/worker: ""
feature.node.kubernetes.io/kernel-version.full: "{{.Values.KernelFullVersion}}"
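To illustrate how these templates render, the daemon set metadata in the previous example might resolve to values such as the following after the SRO fills in the chart values. The rendered names shown here are illustrative and depend on your SpecialResource CR and cluster kernel version:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: simple-kmod-driver-container
  name: simple-kmod-driver-container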
Change into the chart/simple-kmod-0.0.1 directory:
$ cd ..
Save the following YAML for the chart as Chart.yaml in the chart/simple-kmod-0.0.1 directory:
apiVersion: v2
name: simple-kmod
description: Simple kmod will deploy a simple kmod driver-container
icon: https://avatars.githubusercontent.com/u/55542927
type: application
version: 0.0.1
appVersion: 1.0.0
From the chart directory, create the chart using the helm package command:
$ helm package simple-kmod-0.0.1/
Example output
Successfully packaged chart and saved it to: /data/<username>/git/<github_username>/special-resource-operator/yaml-for-docs/chart/simple-kmod-0.0.1/simple-kmod-0.0.1.tgz
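Optionally, you can inspect the metadata of the packaged chart before storing it in a config map. This assumes that the .tgz file is in your current chart directory:
$ helm show chart simple-kmod-0.0.1.tgz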
Create a config map to store the chart files:
Create a directory for the config map files:
$ mkdir cm
Copy the Helm chart into the cm directory:
$ cp simple-kmod-0.0.1.tgz cm/simple-kmod-0.0.1.tgz
Create an index file specifying the Helm repo that contains the Helm chart:
$ helm repo index cm --url=cm://simple-kmod/simple-kmod-chart
Create a namespace for the objects defined in the Helm chart:
$ oc create namespace simple-kmod
Create the config map object:
$ oc create cm simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/simple-kmod-0.0.1.tgz -n simple-kmod
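Optionally, to confirm that the chart archive and index file are stored in the config map, describe the config map:
$ oc describe cm simple-kmod-chart -n simple-kmod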
Use the following SpecialResource manifest to deploy the simple-kmod object using the Helm chart that you created in the config map. Save this YAML as simple-kmod-configmap.yaml:
apiVersion: sro.openshift.io/v1beta1
kind: SpecialResource
metadata:
name: simple-kmod
spec:
#debug: true (1)
namespace: simple-kmod
chart:
name: simple-kmod
version: 0.0.1
repository:
name: example
url: cm://simple-kmod/simple-kmod-chart (2)
set:
kind: Values
apiVersion: sro.openshift.io/v1beta1
kmodNames: ["simple-kmod", "simple-procfs-kmod"]
buildArgs:
- name: "KMODVER"
value: "SRO"
driverContainer:
source:
git:
ref: "master"
uri: "https://github.com/openshift-psap/kvc-simple-kmod.git"
1 Optional: Uncomment the #debug: true line to have the YAML files in the chart printed in full in the Operator logs and to verify that the logs are created and templated properly.
2 The spec.chart.repository.url field tells the SRO to look for the chart in a config map.
From a command line, create the SpecialResource file:
$ oc create -f simple-kmod-configmap.yaml
To remove the simple-kmod kernel module from the node, delete the simple-kmod SpecialResource object.
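For example, assuming you still have the simple-kmod-configmap.yaml manifest, you can delete the object by running the following command:
$ oc delete -f simple-kmod-configmap.yaml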
Verification
The simple-kmod resources are deployed in the simple-kmod namespace as specified in the object manifest. After a short time, the build pod for the simple-kmod driver container starts running. The build completes after a few minutes, and then the driver container pods start running.
Use the oc get pods command to display the status of the build pods:
$ oc get pods -n simple-kmod
Example output
NAME READY STATUS RESTARTS AGE
simple-kmod-driver-build-12813789169ac0ee-1-build 0/1 Completed 0 7m12s
simple-kmod-driver-container-12813789169ac0ee-mjsnh 1/1 Running 0 8m2s
simple-kmod-driver-container-12813789169ac0ee-qtkff 1/1 Running 0 8m2s
Use the oc logs command, along with the build pod name obtained from the oc get pods command above, to display the logs of the simple-kmod driver container image build:
$ oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod
To verify that the simple-kmod kernel modules are loaded, execute the lsmod command in one of the driver container pods that was returned from the oc get pods command above:
$ oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple
Example output
simple_procfs_kmod 16384 0
simple_kmod 16384 0
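If you prefer to confirm the modules directly on a worker node rather than in the driver container pod, one option is to use a debug pod. Replace <node_name> with one of your worker nodes:
$ oc debug node/<node_name> -- chroot /host lsmod | grep simple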
Building and running the simple-kmod SpecialResource for a hub-and-spoke topology
You can use the Special Resource Operator (SRO) on a hub-and-spoke deployment in which Red Hat Advanced Cluster Management (RHACM) connects a hub cluster to one or more managed clusters.
This example procedure shows how the SRO builds driver containers on the hub. The SRO watches hub cluster resources to identify the OKD versions to use for the Helm charts that it uses to create resources, which it then delivers to the spoke clusters.
Prerequisites
You have a running OKD cluster.
You installed the OpenShift CLI (oc).
You are logged into the OpenShift CLI as a user with cluster-admin privileges.
You installed the SRO.
You installed the Helm CLI (helm).
You installed Red Hat Advanced Cluster Management (RHACM) (see the check after this list).
You configured a container registry.
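Optionally, you can confirm on the hub that RHACM reports your managed clusters. The cluster names depend on your environment; the example SpecialResourceModule later in this procedure watches a managed cluster named spoke1:
$ oc get managedclusters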
Procedure
Create a templates directory by running the following command:
$ mkdir -p charts/acm-simple-kmod-0.0.1/templates
Change to the templates directory by running the following command:
$ cd charts/acm-simple-kmod-0.0.1/templates
Create template files for the BuildConfig, Policy, and PlacementRule resources.
Save this YAML template for the image stream and build config in the templates directory as 0001-buildconfig.yaml.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
labels:
app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
annotations:
specialresource.openshift.io/wait: "true"
spec:
nodeSelector:
node-role.kubernetes.io/worker: ""
runPolicy: "Serial"
triggers:
- type: "ConfigChange"
- type: "ImageChange"
source:
dockerfile: |
FROM {{ .Values.driverToolkitImage }} as builder
WORKDIR /build/
RUN git clone -b {{.Values.specialResourceModule.spec.set.git.ref}} {{.Values.specialResourceModule.spec.set.git.uri}}
WORKDIR /build/simple-kmod
RUN make all install KVER={{ .Values.kernelFullVersion }}
FROM registry.redhat.io/ubi8/ubi-minimal
RUN microdnf -y install kmod
COPY --from=builder /etc/driver-toolkit-release.json /etc/
COPY --from=builder /lib/modules/{{ .Values.kernelFullVersion }}/* /lib/modules/{{ .Values.kernelFullVersion }}/
strategy:
dockerStrategy:
dockerfilePath: Dockerfile.SRO
buildArgs:
- name: "IMAGE"
value: {{ .Values.driverToolkitImage }}
{{- range $arg := .Values.buildArgs }}
- name: {{ $arg.name }}
value: {{ $arg.value }}
{{- end }}
- name: KVER
value: {{ .Values.kernelFullVersion }}
output:
to:
kind: DockerImage
name: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}
Save this YAML template for the ACM policy in the templates directory as 0002-policy.yaml.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
name: policy-{{.Values.specialResourceModule.metadata.name}}-ds
annotations:
policy.open-cluster-management.io/categories: CM Configuration Management
policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
policy.open-cluster-management.io/standards: NIST-CSF
spec:
remediationAction: enforce
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: config-{{.Values.specialResourceModule.metadata.name}}-ds
spec:
remediationAction: enforce
severity: low
namespaceselector:
exclude:
- kube-*
include:
- '*'
object-templates:
- complianceType: musthave
objectDefinition:
apiVersion: v1
kind: Namespace
metadata:
name: {{.Values.specialResourceModule.spec.namespace}}
- complianceType: mustonlyhave
objectDefinition:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{.Values.specialResourceModule.metadata.name}}
namespace: {{.Values.specialResourceModule.spec.namespace}}
- complianceType: mustonlyhave
objectDefinition:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{.Values.specialResourceModule.metadata.name}}
namespace: {{.Values.specialResourceModule.spec.namespace}}
rules:
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
verbs:
- use
resourceNames:
- privileged
- complianceType: mustonlyhave
objectDefinition:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{.Values.specialResourceModule.metadata.name}}
namespace: {{.Values.specialResourceModule.spec.namespace}}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{.Values.specialResourceModule.metadata.name}}
subjects:
- kind: ServiceAccount
name: {{.Values.specialResourceModule.metadata.name}}
namespace: {{.Values.specialResourceModule.spec.namespace}}
- complianceType: musthave
objectDefinition:
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
namespace: {{.Values.specialResourceModule.spec.namespace}}
spec:
updateStrategy:
type: OnDelete
selector:
matchLabels:
app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
spec:
serviceAccount: {{.Values.specialResourceModule.metadata.name}}
serviceAccountName: {{.Values.specialResourceModule.metadata.name}}
containers:
- image: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}
name: {{.Values.specialResourceModule.metadata.name}}
imagePullPolicy: Always
command: [sleep, infinity]
lifecycle:
preStop:
exec:
command: ["modprobe", "-r", "-a" , "simple-kmod", "simple-procfs-kmod"]
securityContext:
privileged: true
Save this YAML template for the placement of policies in the templates directory as 0003-policy.yaml.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: {{.Values.specialResourceModule.metadata.name}}-placement
spec:
clusterConditions:
- status: "True"
type: ManagedClusterConditionAvailable
clusterSelector:
matchExpressions:
- key: name
operator: NotIn
values:
- local-cluster
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
name: {{.Values.specialResourceModule.metadata.name}}-binding
placementRef:
apiGroup: apps.open-cluster-management.io
kind: PlacementRule
name: {{.Values.specialResourceModule.metadata.name}}-placement
subjects:
- apiGroup: policy.open-cluster-management.io
kind: Policy
name: policy-{{.Values.specialResourceModule.metadata.name}}-ds
Change into the charts/acm-simple-kmod-0.0.1 directory by running the following command:
$ cd ..
Save the following YAML template for the chart as Chart.yaml in the charts/acm-simple-kmod-0.0.1 directory:
apiVersion: v2
name: acm-simple-kmod
description: Build ACM enabled simple-kmod driver with SpecialResourceOperator
icon: https://avatars.githubusercontent.com/u/55542927
type: application
version: 0.0.1
appVersion: 1.6.4
From the charts directory, create the chart using the command:
$ helm package acm-simple-kmod-0.0.1/
Example output
Successfully packaged chart and saved it to: <directory>/charts/acm-simple-kmod-0.0.1.tgz
Create a config map to store the chart files.
Create a directory for the config map files by running the following command:
$ mkdir cm
Copy the Helm chart into the cm directory by running the following command:
$ cp acm-simple-kmod-0.0.1.tgz cm/acm-simple-kmod-0.0.1.tgz
Create an index file specifying the Helm repository that contains the Helm chart by running the following command:
$ helm repo index cm --url=cm://acm-simple-kmod/acm-simple-kmod-chart
Create a namespace for the objects defined in the Helm chart by running the following command:
$ oc create namespace acm-simple-kmod
Create the config map object by running the following command:
$ oc create cm acm-simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/acm-simple-kmod-0.0.1.tgz -n acm-simple-kmod
Use the following SpecialResourceModule manifest to deploy the simple-kmod object using the Helm chart that you created in the config map. Save this YAML file as acm-simple-kmod.yaml:
apiVersion: sro.openshift.io/v1beta1
kind: SpecialResourceModule
metadata:
name: acm-simple-kmod
spec:
namespace: acm-simple-kmod
chart:
name: acm-simple-kmod
version: 0.0.1
repository:
name: acm-simple-kmod
url: cm://acm-simple-kmod/acm-simple-kmod-chart
set:
kind: Values
apiVersion: sro.openshift.io/v1beta1
buildArgs:
- name: "KMODVER"
value: "SRO"
registry: <your_registry> (1)
git:
ref: master
uri: https://github.com/openshift-psap/kvc-simple-kmod.git
watch:
- path: "$.metadata.labels.openshiftVersion"
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
name: spoke1
1 Specify the URL for a registry that you have configured.
Create the special resource module by running the following command:
$ oc apply -f charts/examples/acm-simple-kmod.yaml
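Optionally, you can confirm that the label referenced by the watch path exists on the managed cluster. This example assumes the managed cluster is named spoke1, as in the manifest above:
$ oc get managedcluster spoke1 -o jsonpath='{.metadata.labels.openshiftVersion}'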
Verification
Check the status of the build pods by running the following command:
$ KUBECONFIG=~/hub/auth/kubeconfig oc get pod -n acm-simple-kmod
Example output
NAME READY STATUS RESTARTS AGE
acm-simple-kmod-4-18-0-305-34-2-el8-4-x86-64-1-build 0/1 Completed 0 42m
Check that the policies have been created by running the following command:
$ KUBECONFIG=~/hub/auth/kubeconfig oc get placementrules,placementbindings,policies -n acm-simple-kmod
Example output
NAME AGE REPLICAS
placementrule.apps.open-cluster-management.io/acm-simple-kmod-placement 40m
NAME AGE
placementbinding.policy.open-cluster-management.io/acm-simple-kmod-binding 40m
NAME REMEDIATION ACTION COMPLIANCE STATE AGE
policy.policy.open-cluster-management.io/policy-acm-simple-kmod-ds enforce Compliant 40m
Check that the resources have been reconciled by running the following command:
$ KUBECONFIG=~/hub/auth/kubeconfig oc get specialresourcemodule acm-simple-kmod -o json | jq -r '.status'
Example output
{
"versions": {
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3330ef5a178435721ff4efdde762261a9c55212e9b4534385e04037693fbe4": {
"complete": true
}
}
}
Check that the resources are running in the spoke by running the following command:
$ KUBECONFIG=~/spoke1/kubeconfig oc get ds,pod -n acm-simple-kmod
Example output
NAME                                                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64 3 3 3 3 3 <none> 26m
NAME READY STATUS RESTARTS AGE
pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-brw78 1/1 Running 0 26m
pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-fqh5h 1/1 Running 0 26m
pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-m9sfd 1/1 Running 0 26m
Prometheus Special Resource Operator metrics
The Special Resource Operator (SRO) exposes the following Prometheus metrics through the metrics service:
Metric Name | Description |
---|---|
| Returns the nodes that are running pods created by a SRO custom resource (CR). This metric is available for |
| Represents whether a |
| Represents whether the SRO has finished processing a CR successfully (value |
| Returns the number of SRO CRs in the cluster, regardless of their state. |
Additional resources
For information about restoring the Image Registry Operator state before using the Special Resource Operator, see Image registry removed during installation.
For details about installing the NFD Operator, see Node Feature Discovery (NFD) Operator.