Moving logging subsystem resources with node selectors
You can use node selectors to deploy the Elasticsearch and Kibana pods to different nodes.
Moving logging subsystem resources
You can configure the Red Hat OpenShift Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Red Hat OpenShift Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
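The examples in this procedure schedule pods onto nodes that carry the node-role.kubernetes.io/infra label. If your target node is not labeled yet, you can add the label yourself; a minimal sketch, assuming the infrastructure node name used later in this procedure:

$ oc label node ip-10-0-139-48.us-east-2.compute.internal node-role.kubernetes.io/infra=''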
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance
Example ClusterLogging CR

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
# ...
spec:
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
# ...
(1) Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
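The tolerations in the example match taints with the key node-role.kubernetes.io/infra and the value reserved. As a hedged sketch, such taints might have been applied to the infrastructure node with a command of this form, assuming the node name shown later in this procedure:

$ oc adm taint nodes ip-10-0-139-48.us-east-2.compute.internal node-role.kubernetes.io/infra=reserved:NoSchedule node-role.kubernetes.io/infra=reserved:NoExecute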
Verification
To verify that a component has moved, you can use the oc get pod -o wide command.

For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>
You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.28.5
ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.28.5
ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.28.5
ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.28.5
ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.28.5
ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.28.5
ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.28.5
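If the cluster has many nodes, you can narrow the listing to infrastructure nodes by selecting on the role label; a small sketch using a standard label selector:

$ oc get nodes -l node-role.kubernetes.io/infra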
Note that the node has a node-role.kubernetes.io/infra: '' label:

$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml
Example output
kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-139-48.us-east-2.compute.internal
  selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
  uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
  resourceVersion: '39083'
  creationTimestamp: '2020-04-13T19:07:55Z'
  labels:
    node-role.kubernetes.io/infra: ''
...
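A quicker way to confirm the label, rather than reading the full node YAML, is the --show-labels option of oc get; for example:

$ oc get node ip-10-0-139-48.us-east-2.compute.internal --show-labels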
To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
# ...
spec:
# ...
  visualization:
    kibana:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
(1) Add a node selector to match the label in the node specification. After you save the CR, the current Kibana pod is terminated and a new pod is deployed:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m
collector-42dzz 1/1 Running 0 28m
collector-d74rq 1/1 Running 0 28m
collector-m5vr9 1/1 Running 0 28m
collector-nkxl7 1/1 Running 0 28m
collector-pdvqb 1/1 Running 0 28m
collector-tflh6 1/1 Running 0 28m
kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s
kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s
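If you want to follow the rollover as it happens rather than polling, you can watch the pod list; a sketch, assuming the openshift-logging project:

$ oc get pods -n openshift-logging --watch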
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

$ oc get pod kibana-7d85dcffc8-bfpfp -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>
After a few moments, the original Kibana pod is removed.
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m
collector-42dzz 1/1 Running 0 29m
collector-d74rq 1/1 Running 0 29m
collector-m5vr9 1/1 Running 0 29m
collector-nkxl7 1/1 Running 0 29m
collector-pdvqb 1/1 Running 0 29m
collector-tflh6 1/1 Running 0 29m
kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s
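As a final check, you can list only the pods scheduled on the infrastructure node by using a field selector on spec.nodeName; a hedged sketch with the node name from this example:

$ oc get pods -n openshift-logging -o wide --field-selector spec.nodeName=ip-10-0-139-48.us-east-2.compute.internal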