Deploying distributed units manually on single-node OpenShift

The procedures in this topic describe how to manually deploy clusters on a small number of single nodes as distributed units (DUs) during installation.

The procedures do not describe how to install single-node OpenShift, which can be accomplished through many mechanisms. Rather, they capture the elements that should be configured as part of the installation process:

  • Networking is needed to enable connectivity to the single-node OpenShift DU when the installation is complete.

  • Workload partitioning, which can only be configured during installation.

  • Additional items that help minimize potential reboots after installation.

Configuring the distributed units (DUs)

This section describes a set of configurations for an OKD cluster so that it meets the feature and performance requirements necessary for running a distributed unit (DU) application. Some of this content must be applied during installation and other configurations can be applied post-install.

After you have installed the single-node OpenShift DU, further configuration is needed to enable the platform to carry a DU workload. The configurations in this section are applied to the cluster after installation to prepare it for DU workloads.

Enabling workload partitioning

A key feature to enable as part of a single-node OpenShift installation is workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU cores available for application payloads. You must configure workload partitioning at cluster installation time.

You can enable workload partitioning during the cluster installation process only. You cannot disable workload partitioning post-installation. However, you can reconfigure workload partitioning by updating the cpu value that you define in the PerformanceProfile and in the MachineConfig CR in the following procedure.

Procedure

  • The base64-encoded content below contains the CPU set that the management workloads are constrained to. This content must be adjusted to match the CPU set specified in the PerformanceProfile and must be accurate for the number of cores on the cluster.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 02-master-workload-partitioning
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKW2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudC5yZXNvdXJjZXNdCmNwdXNoYXJlcyA9IDAKQ1BVcyA9ICIwLTEsIDUyLTUzIgo=
        mode: 420
        overwrite: true
        path: /etc/crio/crio.conf.d/01-workload-partitioning
        user:
          name: root
      - contents:
          source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg==
        mode: 420
        overwrite: true
        path: /etc/kubernetes/openshift-workload-pinning
        user:
          name: root
  • The contents of /etc/crio/crio.conf.d/01-workload-partitioning should look like this:

[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
[crio.runtime.workloads.management.resources]
cpushares = 0
CPUs = "0-1, 52-53" (1)

(1) The CPUs value varies based on the installation.

If Hyper-Threading is enabled, specify both threads of each core. The CPUs value must match the reserved CPU set specified in the performance profile.

This content should be base64 encoded and provided in the 01-workload-partitioning-content section in the preceding manifest.

  • The contents of /etc/kubernetes/openshift-workload-pinning should look like this:

{
  "management": {
    "cpuset": "0-1,52-53" (1)
  }
}

(1) The cpuset must match the CPUs value in /etc/crio/crio.conf.d/01-workload-partitioning.

This content should be base64 encoded and provided in the openshift-workload-pinning-content section in the preceding manifest.
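For reference, you can generate the base64 payloads for the manifest from local copies of the two files. The following is a minimal sketch; the local file names are illustrative, not part of the procedure:

# Encode the CRI-O workload partitioning drop-in for the MachineConfig source field
$ base64 -w 0 01-workload-partitioning

# Encode the kubelet workload pinning file
$ base64 -w 0 openshift-workload-pinning

# Decode a value from the manifest to confirm the embedded CPU set matches the performance profile
$ echo "<base64_string>" | base64 -d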

Configuring the container mount namespace

To reduce the overall management footprint of the platform, a MachineConfig custom resource (CR) is provided to contain the mount points. No configuration changes are needed. Use the provided settings:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: container-mount-namespace-and-kubelet-conf-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo=
        mode: 493
        path: /usr/local/bin/extractExecStart
      - contents:
          source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo=
        mode: 493
        path: /usr/local/bin/nsenterCmns
    systemd:
      units:
      - contents: |
          [Unit]
          Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts
          [Service]
          Type=oneshot
          RemainAfterExit=yes
          RuntimeDirectory=container-mount-namespace
          Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace
          Environment=BIND_POINT=%t/container-mount-namespace/mnt
          ExecStartPre=bash -c "findmnt ${RUNTIME_DIRECTORY} || mount --make-unbindable --bind ${RUNTIME_DIRECTORY} ${RUNTIME_DIRECTORY}"
          ExecStartPre=touch ${BIND_POINT}
          ExecStart=unshare --mount=${BIND_POINT} --propagation slave mount --make-rshared /
          ExecStop=umount -R ${RUNTIME_DIRECTORY}
        enabled: true
        name: container-mount-namespace.service
      - dropins:
        - contents: |
            [Unit]
            Wants=container-mount-namespace.service
            After=container-mount-namespace.service
            [Service]
            ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
            EnvironmentFile=-/%t/%N-execstart.env
            ExecStart=
            ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
                ${ORIG_EXECSTART}"
          name: 90-container-mount-namespace.conf
        name: crio.service
      - dropins:
        - contents: |
            [Unit]
            Wants=container-mount-namespace.service
            After=container-mount-namespace.service
            [Service]
            ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
            EnvironmentFile=-/%t/%N-execstart.env
            ExecStart=
            ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
                ${ORIG_EXECSTART} --housekeeping-interval=30s"
          name: 90-container-mount-namespace.conf
        - contents: |
            [Service]
            Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s"
            Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s"
          name: 30-kubelet-interval-tuning.conf
        name: kubelet.service
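After this MachineConfig is applied and the node has rebooted, you can spot-check the result. The following commands are an example only; <node_name> is a placeholder for your single-node OpenShift node:

# Confirm the shared mount namespace service is active
$ oc debug node/<node_name> -- chroot /host systemctl status container-mount-namespace.service

# Confirm that the crio and kubelet drop-in units are in place
$ oc debug node/<node_name> -- chroot /host systemctl cat crio.service kubelet.service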

Enabling Stream Control Transmission Protocol (SCTP)

SCTP is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable this protocol.

Procedure

  • No configuration changes are needed. Use the provided settings:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: load-sctp-module
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:,
          verification: {}
        filesystem: root
        mode: 420
        path: /etc/modprobe.d/sctp-blacklist.conf
      - contents:
          source: data:text/plain;charset=utf-8,sctp
        filesystem: root
        mode: 420
        path: /etc/modules-load.d/sctp-load.conf
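After the node reboots with this MachineConfig, you can verify that the SCTP module is available. This is an example check; <node_name> is a placeholder:

# Verify that the sctp kernel module is loaded on the node
$ oc debug node/<node_name> -- chroot /host lsmod | grep sctp

# If it is not listed yet, confirm that it can be loaded (it is no longer blacklisted)
$ oc debug node/<node_name> -- chroot /host modprobe sctp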

Creating OperatorGroups for Operators

This configuration makes it possible to add the Operators needed to configure the platform after installation. It adds the Namespace and OperatorGroup objects for the Local Storage Operator, Logging Operator, PTP Operator, and SR-IOV Network Operator.

Procedure

  • No configuration changes are needed. Use the provided settings:

    Local Storage Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-local-storage
  namespace: openshift-local-storage
spec:
  targetNamespaces:
  - openshift-local-storage

    Logging Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-logging
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging

    PTP Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-ptp
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ptp-operators
  namespace: openshift-ptp
spec:
  targetNamespaces:
  - openshift-ptp

    SR-IOV Network Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
  - openshift-sriov-network-operator
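After you apply these manifests, you can confirm that the namespaces and OperatorGroups exist, for example:

# Confirm the namespaces were created
$ oc get ns openshift-local-storage openshift-logging openshift-ptp openshift-sriov-network-operator

# Confirm an OperatorGroup exists in each namespace
$ oc get operatorgroups -A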

Subscribing to the Operators

The subscription provides the location to download the Operators needed for platform configuration.

Procedure

  • Use the following example to configure the subscription:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable" (1)
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual (2)
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: "stable" (3)
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ptp-operator-subscription
  namespace: openshift-ptp
spec:
  channel: "stable" (4)
  name: ptp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: "stable" (5)
  name: sriov-network-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual

(1) Specify the channel to get the cluster-logging Operator.
(2) Specify Manual or Automatic. In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only after they are explicitly approved.
(3) Specify the channel to get the local-storage-operator Operator.
(4) Specify the channel to get the ptp-operator Operator.
(5) Specify the channel to get the sriov-network-operator Operator.
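Because these subscriptions use Manual approval, the Operators are installed only after you approve their install plans. The following sketch shows one way to check and approve them; the install plan name and namespace are examples:

# Review subscriptions and pending install plans
$ oc get subscriptions,installplans -A

# Approve a pending install plan (example name and namespace)
$ oc patch installplan install-abcde -n openshift-ptp --type merge --patch '{"spec":{"approved":true}}'

# Confirm the Operators reach the Succeeded phase
$ oc get csv -A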

Configuring logging locally and forwarding

To be able to debug a single node distributed unit (DU), logs need to be stored for further analysis.

Procedure

  • Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging (1)
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  managementState: Managed
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder (2)
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - infrastructure: {}
  outputs:
  - name: kafka-open
    type: kafka
    url: tcp://10.46.55.190:9092/test (3)
  pipelines:
  - inputRefs:
    - audit
    name: audit-logs
    outputRefs:
    - kafka-open
  - inputRefs:
    - infrastructure
    name: infrastructure-logs
    outputRefs:
    - kafka-open

(1) Updates the existing ClusterLogging instance or creates the instance if it does not exist.
(2) Updates the existing ClusterLogForwarder instance or creates the instance if it does not exist.
(3) Specifies the URL of the Kafka server.
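After you apply the logging CRs, you can confirm that the collector pods are running and check them for forwarding errors. The pod name below is a placeholder:

# Verify the logging pods are running
$ oc get pods -n openshift-logging

# Check a collector pod for errors reaching the Kafka output
$ oc logs -n openshift-logging <collector_pod_name>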

Configuring the Node Tuning Operator

This is a key configuration for the single node distributed unit (DU). Many of the real-time capabilities and service assurance are configured here.

Procedure

  • Configure the performance profile using the following example:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: perfprofile-policy
spec:
  additionalKernelArgs:
  - idle=poll
  - rcupdate.rcu_normal_after_boot=0
  cpu:
    isolated: 2-19,22-39 (1)
    reserved: 0-1,20-21 (2)
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - count: 32 (3)
      size: 1G (4)
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  net:
    userLevelNetworking: true (5)
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numa:
    topologyPolicy: restricted
  realTimeKernel:
    enabled: true (6)

(1) Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match.
(2) Set the reserved CPUs. In this case, a hyperthreaded pair is allocated on NUMA node 0 and a pair on NUMA node 1.
(3) Set the number of huge pages.
(4) Set the huge page size.
(5) Set to true to isolate the CPUs from networking interrupts.
(6) Set to true to install the real-time Linux kernel.
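After the performance profile is applied and the node reboots, you can spot-check the resulting node configuration. These commands are examples; <node_name> is a placeholder:

# Confirm the profile exists and was processed by the Node Tuning Operator
$ oc get performanceprofile perfprofile-policy

# Confirm the kernel command line contains the expected isolation and huge page arguments
$ oc debug node/<node_name> -- chroot /host cat /proc/cmdline

# Confirm the real-time kernel is running
$ oc debug node/<node_name> -- chroot /host uname -r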

Configuring Precision Time Protocol (PTP)

In the far edge, the RAN uses PTP to synchronize the systems.

Procedure

  • Configure PTP using the following example:

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: du-ptp-slave
  namespace: openshift-ptp
spec:
  profile:
  - interface: ens5f0 (1)
    name: slave
    phc2sysOpts: -a -r -n 24
    ptp4lConf: |
      [global]
      #
      # Default Data Set
      #
      twoStepFlag 1
      slaveOnly 0
      priority1 128
      priority2 128
      domainNumber 24
      #utc_offset 37
      clockClass 248
      clockAccuracy 0xFE
      offsetScaledLogVariance 0xFFFF
      free_running 0
      freq_est_interval 1
      dscp_event 0
      dscp_general 0
      dataset_comparison ieee1588
      G.8275.defaultDS.localPriority 128
      #
      # Port Data Set
      #
      logAnnounceInterval -3
      logSyncInterval -4
      logMinDelayReqInterval -4
      logMinPdelayReqInterval -4
      announceReceiptTimeout 3
      syncReceiptTimeout 0
      delayAsymmetry 0
      fault_reset_interval 4
      neighborPropDelayThresh 20000000
      masterOnly 0
      G.8275.portDS.localPriority 128
      #
      # Run time options
      #
      assume_two_step 0
      logging_level 6
      path_trace_enabled 0
      follow_up_info 0
      hybrid_e2e 0
      inhibit_multicast_service 0
      net_sync_monitor 0
      tc_spanning_tree 0
      tx_timestamp_timeout 50
      unicast_listen 0
      unicast_master_table 0
      unicast_req_duration 3600
      use_syslog 1
      verbose 0
      summary_interval 0
      kernel_leap 1
      check_fup_sync 0
      #
      # Servo Options
      #
      pi_proportional_const 0.0
      pi_integral_const 0.0
      pi_proportional_scale 0.0
      pi_proportional_exponent -0.3
      pi_proportional_norm_max 0.7
      pi_integral_scale 0.0
      pi_integral_exponent 0.4
      pi_integral_norm_max 0.3
      step_threshold 0.0
      first_step_threshold 0.00002
      max_frequency 900000000
      clock_servo pi
      sanity_freq_limit 200000000
      ntpshm_segment 0
      #
      # Transport options
      #
      transportSpecific 0x0
      ptp_dst_mac 01:1B:19:00:00:00
      p2p_dst_mac 01:80:C2:00:00:0E
      udp_ttl 1
      udp6_scope 0x0E
      uds_address /var/run/ptp4l
      #
      # Default interface options
      #
      clock_type OC
      network_transport L2
      delay_mechanism E2E
      time_stamping hardware
      tsproc_mode filter
      delay_filter moving_median
      delay_filter_length 10
      egressLatency 0
      ingressLatency 0
      boundary_clock_jbod 0
      #
      # Clock description
      #
      productDescription ;;
      revisionData ;;
      manufacturerIdentity 00:00:00
      userDescription ;
      timeSource 0xA0
    ptp4lOpts: -2 -s --summary_interval -4
  recommend:
  - match:
    - nodeLabel: node-role.kubernetes.io/master
    priority: 4
    profile: slave

(1) Sets the interface used for PTP.
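After you apply the PtpConfig, the linuxptp daemon pods pick up the profile. The following is an example check; the pod name is a placeholder and the container name can vary by release:

# Confirm the PTP daemon pods are running
$ oc get pods -n openshift-ptp

# Watch the ptp4l and phc2sys output to confirm the interface is synchronizing
$ oc logs -n openshift-ptp <linuxptp_daemon_pod> -c linuxptp-daemon-container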

Disabling Network Time Protocol (NTP)

After the system is configured for Precision Time Protocol (PTP), you need to disable NTP to prevent it from impacting the system clock.

Procedure

  • No configuration changes are needed. Use the provided settings:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: disable-chronyd
spec:
  config:
    systemd:
      units:
      - contents: |
          [Unit]
          Description=NTP client/server
          Documentation=man:chronyd(8) man:chrony.conf(5)
          After=ntpdate.service sntp.service ntpd.service
          Conflicts=ntpd.service systemd-timesyncd.service
          ConditionCapability=CAP_SYS_TIME
          [Service]
          Type=forking
          PIDFile=/run/chrony/chronyd.pid
          EnvironmentFile=-/etc/sysconfig/chronyd
          ExecStart=/usr/sbin/chronyd $OPTIONS
          ExecStartPost=/usr/libexec/chrony-helper update-daemon
          PrivateTmp=yes
          ProtectHome=yes
          ProtectSystem=full
          [Install]
          WantedBy=multi-user.target
        enabled: false
        name: chronyd.service
    ignition:
      version: 2.2.0
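After the MachineConfig is applied, you can confirm that chronyd is disabled on the node. These commands are an example; <node_name> is a placeholder:

# chronyd should report disabled and inactive
$ oc debug node/<node_name> -- chroot /host systemctl is-enabled chronyd
$ oc debug node/<node_name> -- chroot /host systemctl is-active chronyd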

Configuring single root I/O virtualization (SR-IOV)

SR-IOV is commonly used to enable the fronthaul and the midhaul networks.

Procedure

  • Use the following configuration to configure SR-IOV on a single-node distributed unit (DU). Note that the first custom resource (CR) is required. The CRs that follow it are examples.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
spec:
  configDaemonNodeSelector:
    node-role.kubernetes.io/master: ""
  disableDrain: true
  enableInjector: true
  enableOperatorWebhook: true
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-nw-du-mh
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: openshift-sriov-network-operator
  resourceName: du_mh
  vlan: 150 (1)
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-mh
  namespace: openshift-sriov-network-operator
spec:
  deviceType: vfio-pci (2)
  isRdma: false
  nicSelector:
    pfNames:
    - ens7f0 (3)
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numVfs: 8 (4)
  priority: 10
  resourceName: du_mh
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-nw-du-fh
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: openshift-sriov-network-operator
  resourceName: du_fh
  vlan: 140 (5)
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-fh
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice (6)
  isRdma: true
  nicSelector:
    pfNames:
    - ens5f0 (7)
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numVfs: 8 (8)
  priority: 10
  resourceName: du_fh

(1) Specifies the VLAN for the midhaul network.
(2) Select either vfio-pci or netdevice, as needed.
(3) Specifies the interface connected to the midhaul network.
(4) Specifies the number of VFs for the midhaul network.
(5) Specifies the VLAN for the fronthaul network.
(6) Select either vfio-pci or netdevice, as needed.
(7) Specifies the interface connected to the fronthaul network.
(8) Specifies the number of VFs for the fronthaul network.
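After you apply the SR-IOV CRs, you can check that the policies were synced and that the virtual functions are advertised as allocatable node resources. These commands are examples; <node_name> is a placeholder:

# Check that the SR-IOV node state reports Succeeded
$ oc get sriovnetworknodestates -n openshift-sriov-network-operator -o jsonpath='{.items[*].status.syncStatus}'

# Confirm the du_mh and du_fh resources appear in the node allocatable list
$ oc get node <node_name> -o jsonpath='{.status.allocatable}'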

Disabling the console Operator

The console-operator installs and maintains the web console on a cluster. When the node is centrally managed, the Operator is not needed, and disabling it frees resources for application workloads.

Procedure

  • You can disable the Operator using the following configuration file. No configuration changes are needed. Use the provided settings:

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "false"
    include.release.openshift.io/self-managed-high-availability: "false"
    include.release.openshift.io/single-node-developer: "false"
    release.openshift.io/create-only: "true"
  name: cluster
spec:
  logLevel: Normal
  managementState: Removed
  operatorLogLevel: Normal
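You can confirm that the console is removed, for example:

# The console management state should report Removed
$ oc get console.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'

# No console pods should remain in the openshift-console namespace
$ oc get pods -n openshift-console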

Applying the distributed unit (DU) configuration to a single-node OpenShift cluster

Perform the following tasks to configure a single-node cluster for a DU:

  • Apply the required extra installation manifests at installation time.

  • Apply the post-install configuration custom resources (CRs).

Applying the extra installation manifests

To apply the distributed unit (DU) configuration to the single-node cluster, the following extra installation manifests need to be included during installation:

  • Enable workload partitioning.

  • Other MachineConfig objects – A set of MachineConfig custom resources (CRs) is included by default. You can choose to include additional MachineConfig CRs that are unique to your environment. It is recommended, but not required, to apply these CRs during installation to minimize the number of reboots that can occur during post-install configuration. One way to include these manifests during installation is sketched after this list.
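How you include the extra manifests depends on the installation method you use. The following sketch assumes a flow in which openshift-install generates the manifests; the directory and file names are illustrative:

# Generate the installation manifests (assumes an existing install-config.yaml in <install_dir>)
$ openshift-install create manifests --dir=<install_dir>

# Copy the extra DU manifests, such as the workload partitioning MachineConfig, into the generated openshift/ directory
$ cp 02-master-workload-partitioning.yaml <install_dir>/openshift/

# Continue the installation as usual; the exact next step depends on your chosen installation method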

Applying the post-install configuration custom resources (CRs)

  • After OKD is installed on the cluster, use the following command to apply the CRs you configured for the distributed units (DUs):
$ oc apply -f <file_name>.yaml