Updating OpenShift Logging
- Updating from cluster logging in OKD 4.6 or earlier to OpenShift Logging 5.x
- Updating OpenShift Logging to the current version
The following table shows the OKD versions on which each Red Hat OpenShift Logging (RHOL) version is supported:

|  | 4.7 | 4.8 | 4.9 |
|---|---|---|---|
| RHOL 5.1 | X | X | |
| RHOL 5.2 | X | X | X |
| RHOL 5.3 | | X | X |
To upgrade from cluster logging in OKD version 4.6 and earlier to OpenShift Logging 5.x, you update the OKD cluster to version 4.7, 4.8, or 4.9. Then, you update the following operators:
From Elasticsearch Operator 4.x to OpenShift Elasticsearch Operator 5.x
From Cluster Logging Operator 4.x to Red Hat OpenShift Logging Operator 5.x
To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions.
Updating from cluster logging in OKD 4.6 or earlier to OpenShift Logging 5.x
OKD 4.7 made the following name changes:
The cluster logging feature became the Red Hat OpenShift Logging 5.x product.
The Cluster Logging Operator became the Red Hat OpenShift Logging Operator.
The Elasticsearch Operator became the OpenShift Elasticsearch Operator.
To upgrade from cluster logging in OKD version 4.6 and earlier to OpenShift Logging 5.x, you update the OKD cluster to version 4.7, 4.8, or 4.9. Then, you update the following operators:
From Elasticsearch Operator 4.x to OpenShift Elasticsearch Operator 5.x
From Cluster Logging Operator 4.x to Red Hat OpenShift Logging Operator 5.x
You must update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. You must also update both Operators to the same version.
If you update the operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, you delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again.
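For example, a minimal sketch of this workaround from the command line; the exact pod name in your cluster will differ from the placeholder shown:
$ oc get pods -n openshift-logging | grep cluster-logging-operator
$ oc delete pod -n openshift-logging <cluster-logging-operator-pod-name>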
Prerequisites
The OKD version is 4.7 or later.
The OpenShift Logging status is healthy:
All pods are ready.
The Elasticsearch cluster is healthy.
Your Elasticsearch and Kibana data is backed up.
Procedure
Update the OpenShift Elasticsearch Operator:
From the web console, click Operators → Installed Operators.
Select the openshift-operators-redhat project.
Click the OpenShift Elasticsearch Operator.
Click Subscription → Channel.
In the Change Subscription Update Channel window, select 5.0 or stable-5.x and click Save.
Wait for a few seconds, then click Operators → Installed Operators.
Verify that the OpenShift Elasticsearch Operator version is 5.x.x.
Wait for the Status field to report Succeeded.
Update the Cluster Logging Operator:
From the web console, click Operators → Installed Operators.
Select the openshift-logging project.
Click the Cluster Logging Operator.
Click Subscription → Channel.
In the Change Subscription Update Channel window, select 5.0 or stable-5.x and click Save.
Wait for a few seconds, then click Operators → Installed Operators.
Verify that the Red Hat OpenShift Logging Operator version is 5.0.x or 5.x.x.
Wait for the Status field to report Succeeded.
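If you prefer to verify the installed operator versions from the command line instead of the web console, you can optionally list the ClusterServiceVersion objects in each project; this is a supplemental check, not part of the procedure itself:
$ oc get csv -n openshift-operators-redhat
$ oc get csv -n openshift-logging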
Check the logging components:
Ensure that all Elasticsearch pods are in the Ready status:
$ oc get pod -n openshift-logging --selector component=elasticsearch
Example output
NAME READY STATUS RESTARTS AGE
elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m
elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m
elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m
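Rather than repeatedly listing the pods, you can optionally wait for all Elasticsearch pods to become ready with a single command, using the same component=elasticsearch label selector:
$ oc wait --for=condition=Ready pod --selector component=elasticsearch -n openshift-logging --timeout=300s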
Ensure that the Elasticsearch cluster is healthy:
$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health
{
"cluster_name" : "elasticsearch",
"status" : "green",
}
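The health check above targets a single pod. To run the same check against every Elasticsearch pod, a small loop such as the following sketch can be used; it assumes the pods carry the component=elasticsearch label shown earlier:
$ for pod in $(oc get pods -n openshift-logging --selector component=elasticsearch -o name); do oc exec -n openshift-logging -c elasticsearch $pod -- health; done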
Ensure that the Elasticsearch cron jobs are created:
$ oc project openshift-logging
$ oc get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
elasticsearch-im-app */15 * * * * False 0 <none> 56s
elasticsearch-im-audit */15 * * * * False 0 <none> 56s
elasticsearch-im-infra */15 * * * * False 0 <none> 56s
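If a cron job has not run yet (LAST SCHEDULE shows <none>), you can optionally trigger one run manually to confirm that index management works; the job name test-im-app below is an arbitrary example:
$ oc create job --from=cronjob/elasticsearch-im-app test-im-app -n openshift-logging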
Verify that the log store is updated to 5.0 or 5.x and the indices are green:
$ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices
Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.
Sample output with indices in a green status
Tue Jun 30 14:30:54 UTC 2020
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144
green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148
green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147
green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0
green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158
green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168
green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146
green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145
green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0
green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148
green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148
green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147
green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0
green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0
green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147
green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220
green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0
green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146
green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57
green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9
green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148
green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148
green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0
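With many indices, it can be easier to list only the ones that are not green; apart from the date and header lines, any remaining rows from the following sketch indicate indices that need attention:
$ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices | grep -v green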
Verify that the log collector is updated to 5.0 or 5.x:
$ oc get ds fluentd -o json | grep fluentd-init
Verify that the output includes a fluentd-init container:
"containerName": "fluentd-init"
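You can also optionally inspect the image that the collector daemon set is running, which reflects the updated version; the jsonpath expression below assumes the main collector container is the first container in the pod template:
$ oc get ds fluentd -n openshift-logging -o jsonpath='{.spec.template.spec.containers[0].image}'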
Verify that the log visualizer is updated to 5.0 or 5.x using the Kibana CRD:
$ oc get kibana kibana -o json
Verify that the output includes a Kibana pod with the ready status:
Sample output with a ready Kibana pod
[
{
"clusterCondition": {
"kibana-5fdd766ffd-nb2jj": [
{
"lastTransitionTime": "2020-06-30T14:11:07Z",
"reason": "ContainerCreating",
"status": "True",
"type": ""
},
{
"lastTransitionTime": "2020-06-30T14:11:07Z",
"reason": "ContainerCreating",
"status": "True",
"type": ""
}
]
},
"deployment": "kibana",
"pods": {
"failed": [],
"notReady": []
"ready": []
},
"replicaSets": [
"kibana-5fdd766ffd"
],
"replicas": 1
}
]
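As an optional cross-check outside the Kibana CR, you can confirm that the Kibana pod itself is running and ready; this assumes the pod carries the component=kibana label:
$ oc get pods -n openshift-logging --selector component=kibana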
Updating OpenShift Logging to the current version
To update OpenShift Logging from 5.x to the current version, you change the subscriptions for the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator.
You must update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. You must also update both Operators to the same version.
If you update the operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, you delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again.
Prerequisites
The OKD version is 4.7 or later.
The OpenShift Logging status is healthy:
All pods are ready.
The Elasticsearch cluster is healthy.
Your Elasticsearch and Kibana data is backed up.
Procedure
Update the OpenShift Elasticsearch Operator:
From the web console, click Operators → Installed Operators.
Select the openshift-operators-redhat project.
Click the OpenShift Elasticsearch Operator.
Click Subscription → Channel.
In the Change Subscription Update Channel window, select stable-5.x and click Save.
Wait for a few seconds, then click Operators → Installed Operators.
Verify that the OpenShift Elasticsearch Operator version is 5.x.x.
Wait for the Status field to report Succeeded.
Update the Red Hat OpenShift Logging Operator:
From the web console, click Operators → Installed Operators.
Select the openshift-logging project.
Click the Red Hat OpenShift Logging Operator.
Click Subscription → Channel.
In the Change Subscription Update Channel window, select stable-5.x and click Save.
Wait for a few seconds, then click Operators → Installed Operators.
Verify that the Red Hat OpenShift Logging Operator version is 5.x.x.
Wait for the Status field to report Succeeded.
Check the logging components:
Ensure that all Elasticsearch pods are in the Ready status:
$ oc get pod -n openshift-logging --selector component=elasticsearch
Example output
NAME READY STATUS RESTARTS AGE
elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m
elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m
elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m
Ensure that the Elasticsearch cluster is healthy:
$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health
{
"cluster_name" : "elasticsearch",
"status" : "green",
}
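For more detail than the health script prints, you can optionally query the cluster health endpoint through the es_util utility that ships in the Elasticsearch container; the pod name below is the example pod from the previous step:
$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_util --query="_cluster/health?pretty"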
Ensure that the Elasticsearch cron jobs are created:
$ oc project openshift-logging
$ oc get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
elasticsearch-im-app */15 * * * * False 0 <none> 56s
elasticsearch-im-audit */15 * * * * False 0 <none> 56s
elasticsearch-im-infra */15 * * * * False 0 <none> 56s
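After the cron jobs have had time to fire, you can optionally confirm that they are spawning index management jobs; if they have not run yet, this command returns nothing:
$ oc get jobs -n openshift-logging | grep elasticsearch-im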
Verify that the log store is updated to 5.x and the indices are green:
$ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices
Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.
Sample output with indices in a green status
Tue Jun 30 14:30:54 UTC 2020
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144
green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148
green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147
green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0
green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158
green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168
green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146
green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145
green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0
green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148
green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148
green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147
green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0
green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0
green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147
green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220
green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0
green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146
green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57
green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9
green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148
green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148
green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0
Verify that the log collector is updated to 5.x:
$ oc get ds fluentd -o json | grep fluentd-init
Verify that the output includes a fluentd-init container:
"containerName": "fluentd-init"
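To confirm that the updated collector is actually running on the nodes, you can optionally list the Fluentd pods; this assumes the component=fluentd label used by the collector daemon set:
$ oc get pods -n openshift-logging --selector component=fluentd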
Verify that the log visualizer is updated to 5.x using the Kibana CRD:
$ oc get kibana kibana -o json
Verify that the output includes a Kibana pod with the ready status:
Sample output with a ready Kibana pod
[
{
"clusterCondition": {
"kibana-5fdd766ffd-nb2jj": [
{
"lastTransitionTime": "2020-06-30T14:11:07Z",
"reason": "ContainerCreating",
"status": "True",
"type": ""
},
{
"lastTransitionTime": "2020-06-30T14:11:07Z",
"reason": "ContainerCreating",
"status": "True",
"type": ""
}
]
},
"deployment": "kibana",
"pods": {
"failed": [],
"notReady": []
"ready": []
},
"replicaSets": [
"kibana-5fdd766ffd"
],
"replicas": 1
}
]
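Finally, as an optional sanity check, you can retrieve the route that exposes the updated Kibana instance and open it in a browser; this assumes the route is named kibana in the openshift-logging project:
$ oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}'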