Managing Compliance Operator results and remediations

Each ComplianceCheckResult represents the result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult, is created. Unless requested, remediations are not applied automatically, which gives an OKD administrator the opportunity to review what a remediation does and apply it only after it has been verified.

Filters for compliance check results

By default, the ComplianceCheckResult objects are labeled with several useful labels that allow you to query the checks and decide on the next steps after the results are generated.

List checks that belong to a specific suite:

  $ oc get compliancecheckresults -l compliance.openshift.io/suite=example-compliancesuite

List checks that belong to a specific scan:

  $ oc get compliancecheckresults -l compliance.openshift.io/scan=example-compliancescan

Not all ComplianceCheckResult objects create ComplianceRemediation objects. Only ComplianceCheckResult objects that can be remediated automatically do. A ComplianceCheckResult object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation label. The name of the remediation is the same as the name of the check.

List all failing checks that can be remediated automatically:

  $ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'

List all failing checks that must be remediated manually:

  $ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'

The manual remediation steps are typically stored in the description attribute in the ComplianceCheckResult object.
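
To read the manual remediation steps for a given check, one option is to print the description field with a JSONPath query (a minimal sketch; <check_name> is a placeholder):

  $ oc get compliancecheckresults/<check_name> -o jsonpath='{.description}'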

Table 1. ComplianceCheckResult Status

  Status           Description
  ---------------  ----------------------------------------------------------------------------------------------------------------
  PASS             Compliance check ran to completion and passed.
  FAIL             Compliance check ran to completion and failed.
  INFO             Compliance check ran to completion and found something not severe enough to be considered an error.
  MANUAL           Compliance check does not have a way to automatically assess the success or failure and must be checked manually.
  INCONSISTENT     Compliance check reports different results from different sources, typically cluster nodes.
  ERROR            Compliance check ran, but could not complete properly.
  NOT-APPLICABLE   Compliance check did not run because it is not applicable or not selected.
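
These statuses double as label values, so you can filter on them directly. For example, to list all checks that must be assessed by hand:

  $ oc get compliancecheckresults -l compliance.openshift.io/check-status=MANUAL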

Reviewing a remediation

Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. The ComplianceCheckResult object contains human-readable descriptions of what the check does and what the hardening is trying to prevent, as well as other metadata such as the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult. After the first scan, check for remediations with the state MissingDependencies.
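
A quick way to survey the state of every remediation in the namespace is a custom-columns query (a sketch; it assumes the status.applicationState field shown in the example below):

  $ oc get complianceremediations -o custom-columns=NAME:.metadata.name,STATE:.status.applicationState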

Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects. This example is redacted to only show spec and status and omits metadata:

  spec:
    apply: false
    current:
      object:
        apiVersion: machineconfiguration.openshift.io/v1
        kind: MachineConfig
        spec:
          config:
            ignition:
              version: 3.2.0
            storage:
              files:
              - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf
                mode: 0644
                contents:
                  source: data:,net.ipv4.conf.all.accept_redirects%3D0
    outdated: {}
  status:
    applicationState: NotApplied

The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text.

To see exactly what the remediation does when applied, note that the MachineConfig object contents use Ignition objects for the configuration. See the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path attribute specifies the file that is being created by this remediation (/etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf) and the spec.config.storage.files[0].contents.source attribute specifies the contents of that file.

The contents of the files are URL-encoded.

Use the following Python script to view the contents:

  $ echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))"

Example output

  net.ipv4.conf.all.accept_redirects=0

Applying remediation when using customized machine config pools

When you create a custom MachineConfigPool, add a label to the MachineConfigPool so that the machineConfigPoolSelector present in the KubeletConfig can match the label with the MachineConfigPool.

Do not set protectKernelDefaults: false in the KubeletConfig file, because the MachineConfigPool object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation.
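
For orientation, the following is a minimal sketch of a KubeletConfig object that targets such a pool through its label; the metadata name and the podsPerCore setting are illustrative placeholders, not part of any remediation:

  apiVersion: machineconfiguration.openshift.io/v1
  kind: KubeletConfig
  metadata:
    name: custom-kubelet                # hypothetical name
  spec:
    machineConfigPoolSelector:
      matchLabels:
        pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: ""   # must match the MCP label created in the procedure below
    kubeletConfig:
      podsPerCore: 10                   # illustrative setting only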

Procedure

  1. List the nodes:

    $ oc get nodes

    Example output

    NAME                                         STATUS   ROLES    AGE     VERSION
    ip-10-0-128-92.us-east-2.compute.internal    Ready    master   5h21m   v1.24.0
    ip-10-0-158-32.us-east-2.compute.internal    Ready    worker   5h17m   v1.24.0
    ip-10-0-166-81.us-east-2.compute.internal    Ready    worker   5h17m   v1.24.0
    ip-10-0-171-170.us-east-2.compute.internal   Ready    master   5h21m   v1.24.0
    ip-10-0-197-35.us-east-2.compute.internal    Ready    master   5h22m   v1.24.0
  2. Add a label to the node:

    $ oc label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=

    Example output

    node/ip-10-0-166-81.us-east-2.compute.internal labeled
  3. Create a custom MachineConfigPool CR:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: <machine_config_pool_name>
      labels:
        pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' (1)
    spec:
      machineConfigSelector:
        matchExpressions:
        - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]}
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/<machine_config_pool_name>: ""

    (1) The labels field defines the label name to add for the machine config pool (MCP).
  4. Verify that the MCP was created successfully:

    $ oc get mcp -w

Applying a remediation

The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true:

  $ oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{"spec":{"apply":true}}' --type=merge

After the Compliance Operator processes the applied remediation, the status.applicationState attribute changes to Applied, or to Error if incorrect. When a machine config remediation is applied, that remediation, along with all other applied remediations, is rendered into a MachineConfig object named 75-$scan-name-$suite-name. That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine config daemon running on each node.
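
To confirm that the composite object was rendered, you can list it by name; the placeholders follow the naming scheme described above:

  $ oc get machineconfigs 75-<scan_name>-<suite_name>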

Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-$scan-name-$suite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true.
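
As a sketch, assuming the worker pool is the one affected, pausing and later unpausing could look like the following:

  $ oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'
  # ...apply one or more remediations while the pool is paused...
  $ oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'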

Make sure the pools are unpaused when the CA certificate rotation happens. If the MCPs are paused, the MCO cannot push the newly rotated certificates to those nodes. This causes the cluster to become degraded and causes failure in multiple oc commands, including oc debug, oc logs, oc exec, and oc attach. You receive alerts in the Alerting UI of the OKD web console if an MCP is paused when the certificates are rotated.

The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object.

Applying remediations automatically should only be done with careful consideration.
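
For illustration, here is a minimal ScanSetting sketch with automatic remediation enabled; the name, schedule, and roles are placeholders:

  apiVersion: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  metadata:
    name: <scan_setting_name>
    namespace: openshift-compliance
  autoApplyRemediations: true    # created remediations are applied without manual review
  schedule: "0 1 * * *"          # placeholder cron schedule
  roles:
  - worker
  - master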

Remediating a platform check manually

Checks for Platform scans typically have to be remediated manually by the administrator for two reasons:

  • It is not always possible to automatically determine the value that must be set. For example, one of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow.

  • Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised.

Procedure

  1. The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OKD installation. Inspect the rule by running oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -o yaml. The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows the API object that is checked, so you can modify it and remediate the issue:

    $ oc edit image.config.openshift.io/cluster

    Example output

    apiVersion: config.openshift.io/v1
    kind: Image
    metadata:
      annotations:
        release.openshift.io/create-only: "true"
      creationTimestamp: "2020-09-10T10:12:54Z"
      generation: 2
      name: cluster
      resourceVersion: "363096"
      selfLink: /apis/config.openshift.io/v1/images/cluster
      uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e
    spec:
      allowedRegistriesForImport:
      - domainName: registry.redhat.io
    status:
      externalRegistryHostnames:
      - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com
      internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
  2. Re-run the scan:

    $ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=

Updating remediations

When a new version of compliance content is used, it might deliver a new and different version of a remediation than the previous version. The Compliance Operator will keep the old version of the remediation applied. The OKD administrator is also notified of the new version so that it can be reviewed and applied. A ComplianceRemediation object that had been applied earlier, but was then updated, changes its status to Outdated. The outdated objects are labeled so that they can be searched for easily.

The previously applied remediation contents would then be stored in the spec.outdated attribute of a ComplianceRemediation object and the new updated contents would be stored in the spec.current attribute. After updating the content to a newer version, the administrator then needs to review the remediation. As long as the spec.outdated attribute exists, it would be used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes.

Procedure

  1. Search for any outdated remediations:

    $ oc get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=

    Example output

    NAME                              STATE
    workers-scan-no-empty-passwords   Outdated

    The currently applied remediation is stored in the Outdated attribute and the new, unapplied remediation is stored in the Current attribute. If you are satisfied with the new version, remove the Outdated field. If you want to keep the updated content, remove the Current and Outdated attributes.

  2. Apply the newer version of the remediation:

    $ oc patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{"op":"remove", "path":"/spec/outdated"}]'
  3. The remediation state will switch from Outdated to Applied:

    $ oc get complianceremediations workers-scan-no-empty-passwords

    Example output

    NAME                              STATE
    workers-scan-no-empty-passwords   Applied
  4. The nodes will apply the newer remediation version and reboot.

Unapplying a remediation

It might be required to unapply a remediation that was previously applied.

Procedure

  1. Set the apply flag to false:

    $ oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects -p '{"spec":{"apply":false}}' --type=merge
  2. The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation.

    All affected nodes with the remediation will be rebooted.
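
To confirm the result, one option is to query the applicationState field directly (a sketch; the remediation name matches the one patched above):

  $ oc get complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects -o jsonpath='{.status.applicationState}'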

Removing a KubeletConfig remediation

KubeletConfig remediations are included in node-level profiles. To remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation.

Procedure

  1. Locate the scan-name and compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation:

    $ oc get remediation one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml

    Example output

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ComplianceRemediation
    metadata:
      annotations:
        compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available
      creationTimestamp: "2022-01-05T19:52:27Z"
      generation: 1
      labels:
        compliance.openshift.io/scan-name: one-rule-tp-node-master (1)
        compliance.openshift.io/suite: one-rule-ssb-node
      name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available
      namespace: openshift-compliance
      ownerReferences:
      - apiVersion: compliance.openshift.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: ComplianceCheckResult
        name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available
        uid: fe8e1577-9060-4c59-95b2-3e2c51709adc
      resourceVersion: "84820"
      uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355
    spec:
      apply: true
      current:
        object:
          apiVersion: machineconfiguration.openshift.io/v1
          kind: KubeletConfig
          spec:
            kubeletConfig:
              evictionHard:
                imagefs.available: 10% (2)
      outdated: {}
      type: Configuration
    status:
      applicationState: Applied

    (1) The scan name of the remediation.
    (2) The remediation that was added to the KubeletConfig objects.

    If the remediation invokes an evictionHard kubelet configuration, you must specify all of the evictionHard parameters: memory.available, nodefs.available, nodefs.inodesFree, imagefs.available, and imagefs.inodesFree. If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly. (A sketch of a complete evictionHard stanza follows this procedure.)

  2. Remove the remediation:

    1. Set apply to false for the remediation object:

      $ oc patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{"spec":{"apply":false}}' --type=merge
    2. Using the scan-name, find the KubeletConfig object that the remediation was applied to:

      $ oc get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master

      Example output

      NAME                                 AGE
      compliance-operator-kubelet-master   2m34s
    3. Manually remove the remediation, imagefs.available: 10%, from the KubeletConfig object:

      $ oc edit KubeletConfig compliance-operator-kubelet-master

      All affected nodes with the remediation will be rebooted.

You must also exclude the rule from any scheduled scans in your tailored profiles that auto-apply remediations; otherwise, the remediation will be reapplied during the next scheduled scan.
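
As referenced in the procedure above, the following is a hedged sketch of a complete evictionHard stanza inside a KubeletConfig object; the metadata name and every threshold value are illustrative placeholders, not recommendations:

  apiVersion: machineconfiguration.openshift.io/v1
  kind: KubeletConfig
  metadata:
    name: custom-kubelet-eviction       # hypothetical name
  spec:
    machineConfigPoolSelector:
      matchLabels:
        pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: ""
    kubeletConfig:
      evictionHard:                     # all five parameters must be set together
        memory.available: "500Mi"       # illustrative value
        nodefs.available: "10%"         # illustrative value
        nodefs.inodesFree: "5%"         # illustrative value
        imagefs.available: "15%"        # illustrative value
        imagefs.inodesFree: "10%"       # illustrative value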

Inconsistent ComplianceScan

The ScanSetting object lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding objects would scan. Each node role usually maps to a machine config pool.

It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical.

If some of the results differ from the others, the Compliance Operator flags the affected ComplianceCheckResult objects with the INCONSISTENT status. These ComplianceCheckResult objects are also labeled with compliance.openshift.io/inconsistent-check.

Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status annotation and the annotation compliance.openshift.io/inconsistent-source contains pairs of hostname:status of check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the compliance.openshift.io/inconsistent-source annotation.
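
To locate and inspect these objects, a label filter combined with a JSONPath query is one option (a sketch; the dots in the annotation key must be escaped for JSONPath, and <check_name> is a placeholder):

  $ oc get compliancecheckresults -l compliance.openshift.io/inconsistent-check
  $ oc get compliancecheckresults/<check_name> -o jsonpath='{.metadata.annotations.compliance\.openshift\.io/inconsistent-source}'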

If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible, and correcting the difference between nodes must then be done manually. To get a consistent result, re-run the compliance scan by annotating it with the compliance.openshift.io/rescan= option:

  $ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
