- Installing log storage
- Deploying a Loki log store
- Loki deployment sizing
- Installing the Loki Operator by using the OKD web console
- Creating a secret for Loki object storage by using the web console
- Creating a LokiStack custom resource by using the web console
- Installing the Loki Operator by using the CLI
- Creating a secret for Loki object storage by using the CLI
- Creating a LokiStack custom resource by using the CLI
- Loki object storage
- Deploying an Elasticsearch log store
- Configuring log storage
Installing log storage
You can use the OpenShift CLI (oc) or the OKD web console to deploy a log store on your OKD cluster.
The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
Deploying a Loki log store
You can use the Loki Operator to deploy an internal Loki log store on your OKD cluster. After you install the Loki Operator, you must configure Loki object storage by creating a secret, and then create a LokiStack custom resource (CR).
Loki deployment sizing
Sizing for Loki follows the format <N>x.<size>, where the value <N> is the number of instances and <size> specifies performance capabilities.
 | 1x.demo | 1x.extra-small | 1x.small | 1x.medium |
---|---|---|---|---|
Data transfer | Demo use only | 100GB/day | 500GB/day | 2TB/day |
Queries per second (QPS) | Demo use only | 1-25 QPS at 200ms | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
Replication factor | None | 2 | 2 | 2 |
Total CPU requests | None | 14 vCPUs | 34 vCPUs | 54 vCPUs |
Total CPU requests if using the ruler | None | 16 vCPUs | 42 vCPUs | 70 vCPUs |
Total memory requests | None | 31Gi | 67Gi | 139Gi |
Total memory requests if using the ruler | None | 35Gi | 83Gi | 171Gi |
Total disk requests | 40Gi | 430Gi | 430Gi | 590Gi |
Total disk requests if using the ruler | 80Gi | 750Gi | 750Gi | 910Gi |
Installing the Loki Operator by using the OKD web console
To install and configure logging on your OKD cluster, you must install additional Operators. You can do this from the OperatorHub in the web console.
OKD Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator’s logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.
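For example, after the Loki Operator is installed you can list the CRDs that it registers before creating any CRs. This is a minimal sketch; the exact CRD names depend on the Operator version, but a LokiStack CRD in the loki.grafana.com group is expected:
$ oc get crds | grep loki.grafana.com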
Prerequisites
You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation).
You have administrator permissions.
You have access to the OKD web console.
Procedure
In the OKD web console Administrator perspective, go to Operators → OperatorHub.
Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install.
The Community Loki Operator is not supported by Red Hat.
Select stable or stable-x.y as the Update channel.
The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7.
The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you.
Select Enable operator-recommended cluster monitoring on this namespace.
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
For Update approval select Automatic, then click Install.
If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
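If you choose the Manual strategy, you can also review and approve pending updates from the CLI. A minimal sketch, assuming the Loki Operator subscription is in the openshift-operators-redhat namespace; the InstallPlan name in your cluster will differ:
$ oc get installplans -n openshift-operators-redhat
$ oc patch installplan <install_plan_name> -n openshift-operators-redhat --type merge --patch '{"spec":{"approved":true}}'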
Verification
Go to Operators → Installed Operators.
Make sure the openshift-logging project is selected.
In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date.
Creating a secret for Loki object storage by using the web console
To configure Loki object storage, you must create a secret. You can create a secret by using the OKD web console.
Prerequisites
You have administrator permissions.
You have access to the OKD web console.
You installed the Loki Operator.
Procedure
Go to Workloads → Secrets in the Administrator perspective of the OKD web console.
From the Create drop-down list, select From YAML.
Create a secret that uses the access_key_id and access_key_secret fields to specify your credentials and the bucketnames, endpoint, and region fields to define the object storage location. AWS is used in the following example:
Example Secret object
objectapiVersion: v1
kind: Secret
metadata:
name: logging-loki-s3
namespace: openshift-logging
stringData:
access_key_id: AKIAIOSFODNN7EXAMPLE
access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
bucketnames: s3-bucket-name
endpoint: https://s3.eu-central-1.amazonaws.com
region: eu-central-1
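If you prefer to confirm the result from the command line instead of the web console, a quick sketch, assuming the secret name and namespace from the example above; this lists the keys stored in the secret without printing their values:
$ oc describe secret logging-loki-s3 -n openshift-logging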
Creating a LokiStack custom resource by using the web console
You can create a LokiStack custom resource (CR) by using the OKD web console.
Prerequisites
You have administrator permissions.
You have access to the OKD web console.
You installed the Loki Operator.
Procedure
Go to the Operators → Installed Operators page. Click the All instances tab.
From the Create new drop-down list, select LokiStack.
Select YAML view, and then use the following template to create a LokiStack CR:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki (1)
  namespace: openshift-logging
spec:
  size: 1x.small (2)
  storage:
    schemas:
    - version: v12
      effectiveDate: '2022-06-01'
    secret:
      name: logging-loki-s3 (3)
      type: s3 (4)
  storageClassName: <storage_class_name> (5)
  tenants:
    mode: openshift-logging
1 Use the name logging-loki.
2 Specify the deployment size. In the logging subsystem 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium.
3 Specify the secret used for your log storage.
4 Specify the corresponding storage type.
5 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command.
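After you click Create, you can watch the LokiStack components come up from the CLI as well. A hedged verification sketch, assuming the CR name and namespace used above:
$ oc get lokistack logging-loki -n openshift-logging
$ oc get pods -n openshift-logging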
Installing the Loki Operator by using the CLI
To install and configure logging on your OKD cluster, you must install additional Operators. You can do this from the OKD CLI.
OKD Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator’s logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.
Prerequisites
You have administrator permissions.
You installed the OpenShift CLI (oc).
You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
Create a Subscription object:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat (1)
spec:
  channel: stable (2)
  name: loki-operator
  source: redhat-operators (3)
  sourceNamespace: openshift-marketplace
1 You must specify the openshift-operators-redhat namespace.
2 Specify stable, or stable-5.<y> as the channel.
3 Specify redhat-operators. If your OKD cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
Apply the Subscription object:
$ oc apply -f <filename>.yaml
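You can then confirm that the Operator resolved and installed successfully. A minimal sketch; the cluster service version (CSV) name and version in the output will vary:
$ oc get subscription loki-operator -n openshift-operators-redhat
$ oc get csv -n openshift-operators-redhat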
Creating a secret for Loki object storage by using the CLI
To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI (oc).
Prerequisites
You have administrator permissions.
You installed the Loki Operator.
You installed the OpenShift CLI (oc).
Procedure
Create a secret in the directory that contains your certificate and key files by running the following command:
$ oc create secret generic -n openshift-logging <your_secret_name> \
  --from-file=tls.key=<your_key_file> \
  --from-file=tls.crt=<your_crt_file> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
Use generic or opaque secrets for best results.
Verification
Verify that a secret was created by running the following command:
$ oc get secrets
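To check the keys stored in a specific secret without printing their values, a hedged sketch, assuming the secret name you used above:
$ oc describe secret <your_secret_name> -n openshift-logging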
Creating a LokiStack custom resource by using the CLI
You can create a LokiStack custom resource (CR) by using the OpenShift CLI (oc).
Prerequisites
You have administrator permissions.
You installed the Loki Operator.
You installed the OpenShift CLI (oc).
Procedure
Create a LokiStack CR:
Example LokiStack CR
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small (1)
  storage:
    schemas:
    - version: v12
      effectiveDate: "2022-06-01"
    secret:
      name: logging-loki-s3 (2)
      type: s3 (3)
  storageClassName: <storage_class_name> (4)
  tenants:
    mode: openshift-logging
1 Specify the deployment size. In the logging subsystem 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium.
2 Specify the name of your log store secret.
3 Specify the type of your log store secret.
4 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command.
Apply the LokiStack CR:
$ oc apply -f <filename>.yaml
Verification
Verify the installation by listing the pods in the openshift-logging project by running the following command and observing the output:
$ oc get pods -n openshift-logging
Confirm that you see several pods for components of the logging subsystem, similar to the following list:
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m
collector-6cglq 2/2 Running 0 45s
collector-8r664 2/2 Running 0 45s
collector-8z7px 2/2 Running 0 45s
collector-pdxl9 2/2 Running 0 45s
collector-tc9dx 2/2 Running 0 45s
collector-xkd76 2/2 Running 0 45s
logging-loki-compactor-0 1/1 Running 0 8m2s
logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s
logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s
logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s
logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s
logging-loki-index-gateway-0 1/1 Running 0 8m2s
logging-loki-index-gateway-1 1/1 Running 0 7m29s
logging-loki-ingester-0 1/1 Running 0 8m2s
logging-loki-ingester-1 1/1 Running 0 6m46s
logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s
logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s
logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s
logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s
logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s
Loki object storage
The Loki Operator supports AWS S3, as well as other S3-compatible object stores such as Minio and OpenShift Data Foundation. Azure, GCS, and Swift are also supported.
The recommended nomenclature for Loki storage is logging-loki-<your_storage_provider>.
The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider.
Storage provider | Secret type value |
---|---|
AWS | s3 |
Azure | azure |
Google Cloud | gcs |
Minio | s3 |
OpenShift Data Foundation | s3 |
Swift | swift |
AWS storage
Prerequisites
You installed the Loki Operator.
You installed the OpenShift CLI (oc).
You created a bucket on AWS.
You created an AWS IAM Policy and IAM User.
Procedure
Create an object storage secret with the name logging-loki-aws by running the following command:
$ oc create secret generic logging-loki-aws \
--from-literal=bucketnames="<bucket_name>" \
--from-literal=endpoint="<aws_bucket_endpoint>" \
--from-literal=access_key_id="<aws_access_key_id>" \
--from-literal=access_key_secret="<aws_access_key_secret>" \
--from-literal=region="<aws_region_of_your_bucket>"
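Before creating the secret, you can optionally confirm that the credentials can reach the bucket. A hedged sketch, assuming the AWS CLI is installed and configured with the same access key:
$ aws s3 ls s3://<bucket_name> --region <aws_region_of_your_bucket>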
Azure storage
Prerequisites
You installed the Loki Operator.
You installed the OpenShift CLI (oc).
You created a bucket on Azure.
Procedure
Create an object storage secret with the name logging-loki-azure by running the following command:
$ oc create secret generic logging-loki-azure \
--from-literal=container="<azure_container_name>" \
--from-literal=environment="<azure_environment>" \ (1)
--from-literal=account_name="<azure_account_name>" \
--from-literal=account_key="<azure_account_key>"
1 Supported environment values are AzureGlobal, AzureChinaCloud, AzureGermanCloud, or AzureUSGovernment.
Google Cloud Platform storage
Prerequisites
You installed the Loki Operator.
You installed the OpenShift CLI (oc).
You created a project on Google Cloud Platform (GCP).
You created a bucket in the same project.
You created a service account in the same project for GCP authentication.
Procedure
Copy the service account credentials received from GCP into a file called key.json.
Create an object storage secret with the name logging-loki-gcs by running the following command:
$ oc create secret generic logging-loki-gcs \
--from-literal=bucketname="<bucket_name>" \
--from-file=key.json="<path/to/key.json>"
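If you have not yet downloaded the service account credentials, they can be exported with the gcloud CLI. A minimal sketch, assuming the gcloud CLI is installed and <service_account_email> is the account you created for Loki:
$ gcloud iam service-accounts keys create key.json --iam-account=<service_account_email>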
Minio storage
Prerequisites
You installed the Loki Operator.
You installed the OpenShift CLI (oc).
You have Minio deployed on your cluster.
You created a bucket on Minio.
Procedure
Create an object storage secret with the name logging-loki-minio by running the following command:
$ oc create secret generic logging-loki-minio \
--from-literal=bucketnames="<bucket_name>" \
--from-literal=endpoint="<minio_bucket_endpoint>" \
--from-literal=access_key_id="<minio_access_key_id>" \
--from-literal=access_key_secret="<minio_access_key_secret>"
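The endpoint must be a URL of the Minio service that is reachable from the cluster. The following is a hypothetical usage sketch; the bucket name, service name, namespace, and port are assumptions for illustration:
$ oc create secret generic logging-loki-minio \
  --from-literal=bucketnames="loki" \
  --from-literal=endpoint="http://minio.minio.svc:9000" \
  --from-literal=access_key_id="<minio_access_key_id>" \
  --from-literal=access_key_secret="<minio_access_key_secret>"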
OpenShift Data Foundation storage
Prerequisites
You installed the Loki Operator.
You installed the OpenShift CLI (oc).
You deployed OpenShift Data Foundation.
You configured your OpenShift Data Foundation cluster for object storage.
Procedure
Create an ObjectBucketClaim custom resource in the openshift-logging namespace:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: loki-bucket-odf
  namespace: openshift-logging
spec:
  generateBucketName: loki-bucket-odf
  storageClassName: openshift-storage.noobaa.io
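The claim must be applied before the ConfigMap and secret referenced in the next steps exist. A minimal sketch, assuming the CR above was saved to a file named loki-bucket-odf.yaml (the file name is an assumption):
$ oc apply -f loki-bucket-odf.yaml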
Get bucket properties from the associated ConfigMap object by running the following command:
BUCKET_HOST=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}')
BUCKET_NAME=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}')
BUCKET_PORT=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')
Get bucket access key from the associated secret by running the following command:
ACCESS_KEY_ID=$(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
SECRET_ACCESS_KEY=$(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
Create an object storage secret with the name logging-loki-odf by running the following command:
$ oc create -n openshift-logging secret generic logging-loki-odf \
--from-literal=access_key_id="<access_key_id>" \
--from-literal=access_key_secret="<secret_access_key>" \
--from-literal=bucketnames="<bucket_name>" \
--from-literal=endpoint="https://<bucket_host>:<bucket_port>"
Swift storage
Prerequisites
You installed the Loki Operator.
You installed the OpenShift CLI (oc).
You created a bucket on Swift.
Procedure
Create an object storage secret with the name logging-loki-swift by running the following command:
$ oc create secret generic logging-loki-swift \
--from-literal=auth_url="<swift_auth_url>" \
--from-literal=username="<swift_usernameclaim>" \
--from-literal=user_domain_name="<swift_user_domain_name>" \
--from-literal=user_domain_id="<swift_user_domain_id>" \
--from-literal=user_id="<swift_user_id>" \
--from-literal=password="<swift_password>" \
--from-literal=domain_id="<swift_domain_id>" \
--from-literal=domain_name="<swift_domain_name>" \
--from-literal=container_name="<swift_container_name>"
You can optionally provide project-specific data, region, or both by running the following command:
$ oc create secret generic logging-loki-swift \
--from-literal=auth_url="<swift_auth_url>" \
--from-literal=username="<swift_usernameclaim>" \
--from-literal=user_domain_name="<swift_user_domain_name>" \
--from-literal=user_domain_id="<swift_user_domain_id>" \
--from-literal=user_id="<swift_user_id>" \
--from-literal=password="<swift_password>" \
--from-literal=domain_id="<swift_domain_id>" \
--from-literal=domain_name="<swift_domain_name>" \
--from-literal=container_name="<swift_container_name>" \
--from-literal=project_id="<swift_project_id>" \
--from-literal=project_name="<swift_project_name>" \
--from-literal=project_domain_id="<swift_project_domain_id>" \
--from-literal=project_domain_name="<swift_project_domain_name>" \
--from-literal=region="<swift_region>"
Deploying an Elasticsearch log store
You can use the OpenShift Elasticsearch Operator to deploy an internal Elasticsearch log store on your OKD cluster.
The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
Storage considerations for Elasticsearch
A persistent volume is required for each Elasticsearch deployment configuration. On OKD this is achieved using persistent volume claims (PVCs).
If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.
The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name.
Fluentd ships any logs from the systemd journal and from /var/log/containers/*.log to Elasticsearch.
Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity.
By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no node has storage usage below 85%, Elasticsearch effectively rejects creating new indices and the cluster status becomes RED.
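To see how close each Elasticsearch node is to these watermarks, you can query disk allocation from inside an Elasticsearch pod. A hedged sketch, assuming the es_util helper that ships in the OpenShift Elasticsearch image and a pod name from your cluster:
$ oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/allocation?v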
These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts.
Installing the OpenShift Elasticsearch Operator by using the web console
The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging.
Prerequisites
Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource.
The initial set of OKD nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OKD cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node.
Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments.
Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.
Procedure
In the OKD web console, click Operators → OperatorHub.
Click OpenShift Elasticsearch Operator from the list of available Operators, and click Install.
Ensure that All namespaces on the cluster is selected under Installation mode.
Ensure that openshift-operators-redhat is selected under Installed Namespace.
You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OKD metric, which would cause conflicts.
Select Enable operator recommended cluster monitoring on this namespace.
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
Select stable-5.x as the Update channel.
Select an Update approval strategy:
The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
The Manual strategy requires a user with appropriate credentials to approve the Operator update.
Click Install.
Verification
Verify that the OpenShift Elasticsearch Operator is installed by switching to the Operators → Installed Operators page.
Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded.
Installing the OpenShift Elasticsearch Operator by using the CLI
You can use the OpenShift CLI (oc) to install the OpenShift Elasticsearch Operator.
Prerequisites
Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.
Elasticsearch is a memory-intensive application. By default, OKD installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OKD nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes.
Ensure that you have downloaded the pull secret from the Red Hat OpenShift Cluster Manager as shown in “Obtaining the installation program” in the installation documentation for your platform.
If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.
You have administrator permissions.
You have installed the OpenShift CLI (oc).
Procedure
Create a Namespace object as a YAML file:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat (1)
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true" (2)
1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an OKD metric, which would cause conflicts.
2 String. You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
Apply the Namespace object by running the following command:
$ oc apply -f <filename>.yaml
Create an OperatorGroup object as a YAML file:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat (1)
spec: {}
1 You must specify the openshift-operators-redhat namespace.
Apply the OperatorGroup object by running the following command:
$ oc apply -f <filename>.yaml
Create a Subscription object to subscribe the namespace to the OpenShift Elasticsearch Operator:
Example Subscription
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat (1)
spec:
  channel: stable-x.y (2)
  installPlanApproval: Automatic (3)
  source: redhat-operators (4)
  sourceNamespace: openshift-marketplace
  name: elasticsearch-operator
1 You must specify the openshift-operators-redhat namespace.
2 Specify stable, or stable-x.y as the channel. See the following note.
3 Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update.
4 Specify redhat-operators. If your OKD cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
Specifying stable installs the current version of the latest stable release. Using stable with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable major and minor release.
Specifying stable-x.y installs the current minor version of a specific major release. Using stable-x.y with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable minor release within the major release.
Apply the subscription by running the following command:
$ oc apply -f <filename>.yaml
The OpenShift Elasticsearch Operator is installed to the openshift-operators-redhat namespace and copied to each project in the cluster.
Verification
Run the following command:
$ oc get csv --all-namespaces
Observe the output and confirm that a cluster service version (CSV) for the OpenShift Elasticsearch Operator exists in each namespace.
Example output
NAMESPACE NAME DISPLAY VERSION REPLACES PHASE
default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded
kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded
kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded
kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded
non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded
openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded
openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded
...
Configuring log storage
You can configure which log storage type your logging subsystem uses by modifying the ClusterLogging custom resource (CR).
Prerequisites
You have administrator permissions.
You have installed the OpenShift CLI (oc).
You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch.
You have created a ClusterLogging CR.
The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
Procedure
Modify the ClusterLogging CR logStore spec:
ClusterLogging CR example
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
# ...
spec:
# ...
  logStore:
    type: <log_store_type> (1)
    elasticsearch: (2)
      nodeCount: <integer>
      resources: {}
      storage: {}
      redundancyPolicy: <redundancy_type> (3)
    lokistack: (4)
      name: {}
# ...
1 Specify the log store type. This can be either lokistack or elasticsearch.
2 Optional configuration options for the Elasticsearch log store.
3 Specify the redundancy type. This value can be ZeroRedundancy, SingleRedundancy, MultipleRedundancy, or FullRedundancy.
4 Optional configuration options for LokiStack.
Example ClusterLogging CR to specify LokiStack as the log store
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
# ...
Apply the ClusterLogging CR by running the following command:
$ oc apply -f <filename>.yaml
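To confirm that the change took effect, you can inspect the CR and watch the log store pods roll out. A minimal sketch, assuming the ClusterLogging CR is named instance in the openshift-logging namespace:
$ oc get clusterlogging instance -n openshift-logging -o yaml
$ oc get pods -n openshift-logging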