Logging using LokiStack
In the logging subsystem documentation, LokiStack refers to the combination of Loki and a web proxy with OKD authentication integration that the logging subsystem supports. LokiStack’s proxy uses OKD authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the logging subsystem. Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. You can query Loki by using the LogQL log query language.
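As a brief illustration (a hypothetical query, assuming the log_type and kubernetes_namespace_name labels that also appear in the retention examples later in this section), a LogQL query first selects streams by label and then filters the log lines at query time:
{log_type="application", kubernetes_namespace_name="my-namespace"} |= "error"
Here my-namespace and the "error" filter are placeholders, not values from this documentation.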
Deployment Sizing
Sizing for Loki follows the format of <N>x.<size>, where the value <N> is the number of instances and <size> specifies the performance capabilities.
1x.extra-small is for demo purposes only, and is not supported.
| | 1x.extra-small | 1x.small | 1x.medium |
|---|---|---|---|
| Data transfer | Demo use only. | 500GB/day | 2TB/day |
| Queries per second (QPS) | Demo use only. | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor | None | 2 | 3 |
| Total CPU requests | 5 vCPUs | 36 vCPUs | 54 vCPUs |
| Total Memory requests | 7.5Gi | 63Gi | 139Gi |
| Total Disk requests | 150Gi | 300Gi | 450Gi |
Supported API Custom Resource Definitions
LokiStack development is ongoing; not all APIs are currently supported.
| CustomResourceDefinition (CRD) | ApiVersion | Support state |
|---|---|---|
| LokiStack | lokistack.loki.grafana.com/v1 | Supported in 5.5 |
| RulerConfig | rulerconfig.loki.grafana/v1beta1 | Technology Preview |
| AlertingRule | alertingrule.loki.grafana/v1beta1 | Technology Preview |
| RecordingRule | recordingrule.loki.grafana/v1beta1 | Technology Preview |
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Deploying the LokiStack
You can use the OKD web console to deploy the LokiStack.
Prerequisites
Logging subsystem for Red Hat OpenShift Operator 5.5 and later
Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)
Procedure
Install the Loki Operator:
In the OKD web console, click Operators → OperatorHub.
Choose Loki Operator from the list of available Operators, and click Install.
Under Installation Mode, select All namespaces on the cluster.
Under Installed Namespace, select openshift-operators-redhat.
You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and might publish a metric with the same name as an OKD metric, which would cause conflicts.
Select Enable operator recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
Select an Approval Strategy.
The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
The Manual strategy requires a user with appropriate credentials to approve the Operator update.
Click Install.
Verify that you installed the Loki Operator. Visit the Operators → Installed Operators page and look for Loki Operator.
Ensure that Loki Operator is listed with Status as Succeeded in all the projects.
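If you prefer to verify from the command line, one optional check (not part of the original procedure) is to list the ClusterServiceVersion objects in the namespace you selected above:
$ oc get csv -n openshift-operators-redhat
The Loki Operator entry should report the Succeeded phase.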
Create a Secret YAML file that uses the access_key_id and access_key_secret fields to specify your AWS credentials and bucketnames, endpoint, and region to define the object storage location. For example:
apiVersion: v1
kind: Secret
metadata:
name: logging-loki-s3
namespace: openshift-logging
stringData:
access_key_id: AKIAIOSFODNN7EXAMPLE
access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
bucketnames: s3-bucket-name
endpoint: https://s3.eu-central-1.amazonaws.com
region: eu-central-1
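If you would rather not write the Secret YAML by hand, an equivalent sketch is to create the same secret directly with oc create secret generic, reusing the example values above as placeholders:
$ oc create secret generic logging-loki-s3 \
    --from-literal=access_key_id=AKIAIOSFODNN7EXAMPLE \
    --from-literal=access_key_secret=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
    --from-literal=bucketnames=s3-bucket-name \
    --from-literal=endpoint=https://s3.eu-central-1.amazonaws.com \
    --from-literal=region=eu-central-1 \
    -n openshift-logging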
Create the LokiStack custom resource (CR):
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
size: 1x.small
storage:
schemas:
- version: v12
effectiveDate: "2022-06-01"
secret:
name: logging-loki-s3
type: s3
storageClassName: gp3-csi (1)
tenants:
mode: openshift-logging
1 Or gp2-csi.
Apply the LokiStack CR:
$ oc apply -f logging-loki.yaml
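Optionally, you can confirm that the LokiStack components start. This check is not part of the original procedure and assumes the names used in the example CR:
$ oc get lokistack logging-loki -n openshift-logging
$ oc get pods -n openshift-logging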
Create a ClusterLogging custom resource (CR):
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
managementState: Managed
logStore:
type: lokistack
lokistack:
name: logging-loki
collection:
type: vector
Apply the ClusterLogging CR:
$ oc apply -f cr-lokistack.yaml
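Optionally, verify that the collector pods reach a Running state, reusing the component=collector label that the troubleshooting section later in this document also relies on:
$ oc get pods -n openshift-logging -l component=collector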
Enable the Red Hat OpenShift Logging Console Plugin:
In the OKD web console, click Operators → Installed Operators.
Select the Red Hat OpenShift Logging Operator.
Under Console plugin, click Disabled.
Select Enable and then Save. This change restarts the openshift-console pods.
After the pods restart, you will receive a notification that a web console update is available, prompting you to refresh.
After refreshing the web console, click Observe from the left main menu. A new option for Logs is available.
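You can also confirm from the command line that the plugin is registered with the console Operator. This is an optional check; the exact plugin name listed depends on your Logging Operator version:
$ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'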
Enabling stream-based retention with Loki
With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules.
To enable stream-based retention, create a LokiStack custom resource (CR):
Example global stream-based retention
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
limits:
global: (1)
retention: (2)
days: 20
streams:
- days: 4
priority: 1
selector: '{kubernetes_namespace_name=~"test.+"}' (3)
- days: 1
priority: 1
selector: '{log_type="infrastructure"}'
managementState: Managed
replicationFactor: 1
size: 1x.small
storage:
schemas:
- effectiveDate: "2020-10-11"
version: v11
secret:
name: logging-loki-s3
type: aws
storageClassName: standard
tenants:
mode: openshift-logging
1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage.
2 Retention is enabled in the cluster when this block is added to the CR.
3 Contains the LogQL query used to define the log stream.
Example per-tenant stream-based retention
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
limits:
global:
retention:
days: 20
tenants: (1)
application:
retention:
days: 1
streams:
- days: 4
selector: '{kubernetes_namespace_name=~"test.+"}' (2)
infrastructure:
retention:
days: 5
streams:
- days: 1
selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}'
managementState: Managed
replicationFactor: 1
size: 1x.small
storage:
schemas:
- effectiveDate: "2020-10-11"
version: v11
secret:
name: logging-loki-s3
type: aws
storageClassName: standard
tenants:
mode: openshift-logging
1 Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure.
2 Contains the LogQL query used to define the log stream.
Apply the LokiStack CR:
$ oc apply -f <filename>.yaml
This does not manage the retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage.
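To make the per-tenant example concrete: an application log stream from a namespace matching test.+ is kept for 4 days, other application logs fall back to the 1 day tenant rule, and tenants without their own rules use the 20 day global setting. After applying the CR, an optional way to read back the configured limits is:
$ oc get lokistack logging-loki -n openshift-logging -o jsonpath='{.spec.limits}'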
Forwarding logs to LokiStack
To configure log forwarding to the LokiStack gateway, you must create a ClusterLogging custom resource (CR).
Prerequisites
The Logging subsystem for Red Hat OpenShift version 5.5 or newer is installed on your cluster.
The Loki Operator is installed on your cluster.
Procedure
Create a ClusterLogging custom resource (CR):
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
managementState: Managed
logStore:
type: lokistack
lokistack:
name: logging-loki
collection:
type: vector
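The procedure ends with the CR definition; assuming you save it to a file, apply it in the same way as in the deployment section:
$ oc apply -f <filename>.yaml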
Troubleshooting Loki rate limit errors
If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors.
These errors can occur during normal operation. For example, when adding the logging subsystem to a cluster that already has some logs, rate limit errors might occur while the logging subsystem tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.
In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR).
Conditions
The Log Forwarder API is configured to forward logs to Loki.
Your system sends a block of messages that is larger than 2 MB to Loki. For example:
"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\
.......
......
......
......
\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}
After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages:
429 Too Many Requests Ingestion rate limit exceeded
Example Vector error message
2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true
Example Fluentd error message
2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n"
The error is also visible on the receiving end. For example, in the LokiStack ingester pod:
Example Loki ingester error message
level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream
Procedure
Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
limits:
global:
ingestion:
ingestionBurstSize: 16 (1)
ingestionRate: 8 (2)
# ...
1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted.
2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
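As a worked example, the Fluentd error message above reports a limit of 4194304 bytes/sec, which is 4 MB/sec. The example values therefore raise the soft ingestion rate limit to 8 MB/sec and allow individual push requests of up to 16 MB. Assuming you prefer to change the running resource in place rather than re-apply a file, you can open the CR for editing with:
$ oc edit lokistack logging-loki -n openshift-logging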