Moving logging subsystem resources with node selectors
You can use node selectors to deploy the Elasticsearch and Kibana pods to different nodes.
Moving OpenShift Logging resources
You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
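The steps that follow assume the target node already carries the node-role.kubernetes.io/infra: '' label that the node selector matches. If it does not, a minimal sketch for adding the label (the node name is a placeholder) is:
$ oc label node <node-name> node-role.kubernetes.io/infra=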
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
...
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
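If you reserve infrastructure nodes with a taint, the tolerations shown in the example match taints of the following form. This is a sketch of one way to add them; <node-name> is a placeholder:
$ oc adm taint nodes <node-name> node-role.kubernetes.io/infra=reserved:NoSchedule
$ oc adm taint nodes <node-name> node-role.kubernetes.io/infra=reserved:NoExecute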
Verification
To verify that a component has moved, you can use the oc get pod -o wide command.
For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:
$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>
You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.26.0
ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.26.0
ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.26.0
ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.26.0
ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.26.0
ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.26.0
ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.26.0
Note that the node has a node-role.kubernetes.io/infra: '' label:
$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml
Example output
kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-139-48.us-east-2.compute.internal
  selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
  uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
  resourceVersion: '39083'
  creationTimestamp: '2020-04-13T19:07:55Z'
  labels:
    node-role.kubernetes.io/infra: ''
...
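As a shorter check, you can list only the nodes that carry the infra label by using a standard label selector:
$ oc get nodes -l node-role.kubernetes.io/infra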
To move the Kibana pod, edit the ClusterLogging CR to add a node selector:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
...
  visualization:
    kibana:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
1 Add a node selector to match the label in the node specification. After you save the CR, the current Kibana pod is terminated and a new pod is deployed:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m
fluentd-42dzz 1/1 Running 0 28m
fluentd-d74rq 1/1 Running 0 28m
fluentd-m5vr9 1/1 Running 0 28m
fluentd-nkxl7 1/1 Running 0 28m
fluentd-pdvqb 1/1 Running 0 28m
fluentd-tflh6 1/1 Running 0 28m
kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s
kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s
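If you prefer to follow the rollover as it happens, you can watch the pods in the openshift-logging project instead of listing them repeatedly:
$ oc get pods -n openshift-logging -w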
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:
$ oc get pod kibana-7d85dcffc8-bfpfp -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>
After a few moments, the original Kibana pod is removed.
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m
fluentd-42dzz 1/1 Running 0 29m
fluentd-d74rq 1/1 Running 0 29m
fluentd-m5vr9 1/1 Running 0 29m
fluentd-nkxl7 1/1 Running 0 29m
fluentd-pdvqb 1/1 Running 0 29m
fluentd-tflh6 1/1 Running 0 29m
kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s
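As an alternative to editing the CR interactively, the same Kibana node selector could be applied with a merge patch. This is a sketch, not part of the original procedure; it assumes the ClusterLogging CR is named instance in the openshift-logging project, as shown earlier:
$ oc patch ClusterLogging instance -n openshift-logging --type merge \
  -p '{"spec":{"visualization":{"kibana":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}}}'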