- Forwarding logs to third-party systems
- About forwarding logs to third-party systems
- Supported log data output types
- Forwarding logs to an external Elasticsearch instance
- Forwarding logs using the Fluentd forward protocol
- Forwarding logs using the syslog protocol
- Forwarding logs to a Kafka broker
- Forwarding application logs from specific projects
- Forwarding application logs from specific pods
- Forwarding logs using the legacy Fluentd method
- Forwarding logs using the legacy syslog method
Forwarding logs to third-party systems
By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because the internal store does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
To send logs to other log aggregators, you use the OKD Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.
To send audit logs to the internal log store, use the Cluster Log Forwarder as described in Forward audit logs to the log store.
When you forward logs externally, the Red Hat OpenShift Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
Alternatively, you can create a config map to use the Fluentd forward protocol or the syslog protocol to send logs to external systems. However, these methods for forwarding logs are deprecated in OKD and will be removed in a future release.
You cannot use the config map methods and the Cluster Log Forwarder in the same cluster.
About forwarding logs to third-party systems
Forwarding cluster logs to external third-party systems requires a combination of outputs and pipelines specified in a ClusterLogForwarder custom resource (CR) to send logs to specific endpoints inside and outside of your OKD cluster. You can also use inputs to forward the application logs associated with a specific project to an endpoint.
An output is the destination for log data that you define, or where you want the logs sent. An output can be one of the following types:
- elasticsearch. An external Elasticsearch 6 (all releases) instance. The elasticsearch output can use a TLS connection.
- fluentdForward. An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The fluentdForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret; see the secret sketch after this list. Shared-key authentication can be used with or without TLS.
- syslog. An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection.
- kafka. A Kafka broker. The kafka output can use a TCP or TLS connection.
- default. The internal OKD Elasticsearch instance. You are not required to configure the default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator.
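For example, a minimal, hedged sketch of creating a shared-key secret that a fluentdForward output could reference. The secret name fluentd-secret matches the later examples in this document; the key value is a placeholder, and the exact keys your aggregator expects depend on its configuration:
$ oc create secret generic fluentd-secret --from-literal=shared_key=<shared_key_value> -n openshift-logging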
If the output URL scheme requires TLS (HTTPS, TLS, or UDPS), then TLS server-side authentication is enabled. To also enable client authentication, the output must name a secret in the openshift-logging project. The secret must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
A pipeline defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
- application. Container logs generated by user applications running in the cluster, except infrastructure container applications.
- infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.
- audit. Logs generated by auditd, the node audit system, and the audit logs from the Kubernetes API server and the OpenShift API server.
You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.
An input forwards the application logs associated with a specific project to a pipeline.
In the pipeline, you define which log types to forward using an inputRefs parameter and where to forward the logs using an outputRefs parameter.
Note the following:
- If a ClusterLogForwarder CR object exists, logs are not forwarded to the default Elasticsearch instance unless there is a pipeline with the default output; see the minimal sketch after this list.
- By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because the internal store does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API.
- If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.
- You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.
- The internal OKD Elasticsearch instance does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. OpenShift Logging does not comply with those regulations.
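For example, a minimal sketch of a ClusterLogForwarder CR that keeps container and infrastructure logs flowing to the internal log store while the CR exists. The pipeline name all-to-default is only illustrative; adjust the log types to your needs:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default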
You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration.
The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-project project to the internal Elasticsearch instance.
Sample log forwarding outputs and pipelines
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: elasticsearch-secure (3)
type: "elasticsearch"
url: https://elasticsearch.secure.com:9200
secret:
name: elasticsearch
- name: elasticsearch-insecure (4)
type: "elasticsearch"
url: http://elasticsearch.insecure.com:9200
- name: kafka-app (5)
type: "kafka"
url: tls://kafka.secure.com:9093/app-topic
inputs: (6)
- name: my-app-logs
application:
namespaces:
- my-project
pipelines:
- name: audit-logs (7)
inputRefs:
- audit
outputRefs:
- elasticsearch-secure
- default
parse: json (8)
labels:
secure: "true" (9)
datacenter: "east"
- name: infrastructure-logs (10)
inputRefs:
- infrastructure
outputRefs:
- elasticsearch-insecure
labels:
datacenter: "west"
- name: my-app (11)
inputRefs:
- my-app-logs
outputRefs:
- default
- inputRefs: (12)
- application
outputRefs:
- kafka-app
labels:
datacenter: "south"
1 The name of the ClusterLogForwarder CR must be instance.
2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
3 Configuration for a secure Elasticsearch output using a secret with a secure URL.
4 Configuration for an insecure Elasticsearch output.
5 Configuration for a Kafka output using client-authenticated TLS communication over a secure URL.
6 Configuration for an input to filter application logs from the my-project project.
7 Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance.
8 Optional: Forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
9 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
10 Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
11 Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance.
12 Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name.
Fluentd log handling when the external log aggregator is unavailable
If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OKD rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.
Supported log data output types
Red Hat OpenShift Logging 5.0 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
| --- | --- | --- |
| fluentdForward | fluentd forward v1 | fluentd 1.7.4, logstash 7.10.1 |
| elasticsearch | elasticsearch | Elasticsearch 6.8.1, Elasticsearch 7.10.1 |
| syslog | RFC-3164, RFC-5424 | rsyslog 8.37.0-9.el7 |
| kafka | kafka 0.11 | kafka 2.4.1 |
Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.
Forwarding logs to an external Elasticsearch instance
You can optionally forward logs to an external Elasticsearch instance in addition to, or instead of, the internal OKD Elasticsearch instance. You are responsible for configuring the external log aggregator to receive log data from OKD.
To configure log forwarding to an external Elasticsearch instance, create a ClusterLogForwarder custom resource (CR) with an output to that instance and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.
To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance. You do not need to create a default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator.
If you want to forward logs to only the internal OKD Elasticsearch instance, you do not need to create a ClusterLogForwarder CR.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create a ClusterLogForwarder CR YAML file similar to the following:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: elasticsearch-insecure (3)
type: "elasticsearch" (4)
url: http://elasticsearch.insecure.com:9200 (5)
- name: elasticsearch-secure
type: "elasticsearch"
url: https://elasticsearch.secure.com:9200
secret:
name: es-secret (6)
pipelines:
- name: application-logs (7)
inputRefs: (8)
- application
- audit
outputRefs:
- elasticsearch-secure (9)
- default (10)
parse: json (11)
labels:
myLabel: "myValue" (12)
- name: infrastructure-audit-logs (13)
inputRefs:
- infrastructure
outputRefs:
- elasticsearch-insecure
labels:
logs: "audit-infra"
1 The name of the ClusterLogForwarder CR must be instance.
2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
3 Specify a name for the output.
4 Specify the elasticsearch type.
5 Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6 If using an https prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent; see the secret sketch after these callouts.
7 Optional: Specify a name for the pipeline.
8 Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
9 Specify the output to use with that pipeline for forwarding the logs.
10 Optional: Specify the default output to send the logs to the internal Elasticsearch instance.
11 Optional: Forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
12 Optional: String. One or more labels to add to the logs.
13 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
- Optional: A name to describe the pipeline.
- The inputRefs is the log type to forward using that pipeline: application, infrastructure, or audit.
- The outputRefs is the name of the output to use.
- Optional: String. One or more labels to add to the logs.
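The es-secret referenced in this example must exist in the openshift-logging project before the forwarder can use it. A minimal, hedged sketch of creating it from local certificate files (the file paths are placeholders):
$ oc create secret generic es-secret --from-file=tls.crt=<path_to_tls.crt> --from-file=tls.key=<path_to_tls.key> --from-file=ca-bundle.crt=<path_to_ca_bundle.crt> -n openshift-logging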
Create the CR object:
$ oc create -f <file-name>.yaml
The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=fluentd
Forwarding logs using the Fluentd forward protocol
You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OKD.
To configure log forwarding using the forward protocol, create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.
Alternately, you can use a config map to forward logs using the forward protocols. However, this method is deprecated in OKD and will be removed in a future release.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create a ClusterLogForwarder CR YAML file similar to the following:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: fluentd-server-secure (3)
type: fluentdForward (4)
url: 'tls://fluentdserver.security.example.com:24224' (5)
secret: (6)
name: fluentd-secret
- name: fluentd-server-insecure
type: fluentdForward
url: 'tcp://fluentdserver.home.example.com:24224'
pipelines:
- name: forward-to-fluentd-secure (7)
inputRefs: (8)
- application
- audit
outputRefs:
- fluentd-server-secure (9)
- default (10)
parse: json (11)
labels:
clusterId: "C1234" (12)
- name: forward-to-fluentd-insecure (13)
inputRefs:
- infrastructure
outputRefs:
- fluentd-server-insecure
labels:
clusterId: "C1234"
1 The name of the ClusterLogForwarder CR must be instance.
2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
3 Specify a name for the output.
4 Specify the fluentdForward type.
5 Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
7 Optional: Specify a name for the pipeline.
8 Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
9 Specify the output to use with that pipeline for forwarding the logs.
10 Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
11 Optional: Forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
12 Optional: String. One or more labels to add to the logs.
13 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
- Optional: A name to describe the pipeline.
- The inputRefs is the log type to forward using that pipeline: application, infrastructure, or audit.
- The outputRefs is the name of the output to use.
- Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=fluentd
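Optionally, confirm that the CR was accepted; a hedged check, assuming the instance name and namespace used above:
$ oc get clusterlogforwarder instance -n openshift-logging -o yaml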
Forwarding logs using the syslog protocol
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OKD.
To configure log forwarding using the syslog protocol, create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
Alternately, you can use a config map to forward logs using the syslog RFC3164 protocols. However, this method is deprecated in OKD and will be removed in a future release.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create a ClusterLogForwarder CR YAML file similar to the following:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: rsyslog-east (3)
type: syslog (4)
syslog: (5)
facility: local0
rfc: RFC3164
payloadKey: message
severity: informational
url: 'tls://rsyslogserver.east.example.com:514' (6)
secret: (7)
name: syslog-secret
- name: rsyslog-west
type: syslog
syslog:
appName: myapp
facility: user
msgID: mymsg
procID: myproc
rfc: RFC5424
severity: debug
url: 'udp://rsyslogserver.west.example.com:514'
pipelines:
- name: syslog-east (8)
inputRefs: (9)
- audit
- application
outputRefs: (10)
- rsyslog-east
- default (11)
parse: json (12)
labels:
secure: "true" (13)
syslog: "east"
- name: syslog-west (14)
inputRefs:
- infrastructure
outputRefs:
- rsyslog-west
- default
labels:
syslog: "west"
1 The name of the ClusterLogForwarder CR must be instance.
2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
3 Specify a name for the output.
4 Specify the syslog type.
5 Optional: Specify the syslog parameters, listed below.
6 Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure), or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
7 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
8 Optional: Specify a name for the pipeline.
9 Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
10 Specify the output to use with that pipeline for forwarding the logs.
11 Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
12 Optional: Forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
13 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
14 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
- Optional: A name to describe the pipeline.
- The inputRefs is the log type to forward using that pipeline: application, infrastructure, or audit.
- The outputRefs is the name of the output to use.
- Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=fluentd
Syslog parameters
You can configure the following for the syslog
outputs. For more information, see the syslog RFC3164 or RFC5424 RFC.
facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:
- 0 or kern for kernel messages
- 1 or user for user-level messages, the default.
- 2 or mail for the mail system
- 3 or daemon for system daemons
- 4 or auth for security/authentication messages
- 5 or syslog for messages generated internally by syslogd
- 6 or lpr for the line printer subsystem
- 7 or news for the network news subsystem
- 8 or uucp for the UUCP subsystem
- 9 or cron for the clock daemon
- 10 or authpriv for security authentication messages
- 11 or ftp for the FTP daemon
- 12 or ntp for the NTP subsystem
- 13 or security for the syslog audit log
- 14 or console for the syslog alert log
- 15 or solaris-cron for the scheduling daemon
- 16–23 or local0–local7 for locally used facilities
payloadKey: Optional. The record field to use as payload for the syslog message. Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog.
rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
- 0 or Emergency for messages indicating the system is unusable
- 1 or Alert for messages indicating action must be taken immediately
- 2 or Critical for messages indicating critical conditions
- 3 or Error for messages indicating error conditions
- 4 or Warning for messages indicating warning conditions
- 5 or Notice for messages indicating normal but significant conditions
- 6 or Informational for messages indicating informational messages
- 7 or Debug for messages indicating debug-level messages, the default
tag: Tag specifies a record field to use as a tag on the syslog message.
trimPrefix: Remove the specified prefix from the tag.
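For reference, a hedged sketch of an output stanza that sets several of these parameters, including tag and trimPrefix, which the earlier example does not show. The output name is illustrative, the placement of tag and trimPrefix under the syslog stanza is an assumption based on the parameter list above, and all values are placeholders:
...
spec:
  outputs:
  - name: rsyslog-example
    type: syslog
    syslog:
      facility: local0
      rfc: RFC3164
      severity: informational
      tag: <record_field>
      trimPrefix: <prefix>
    url: 'udp://rsyslogserver.example.com:514'
...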
Additional RFC5424 syslog parameters
The following parameters apply to RFC5424:
appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.
msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.
procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.
Forwarding logs to a Kafka broker
You can forward logs to an external Kafka broker in addition to, or instead of, the default Elasticsearch log store.
To configure log forwarding to an external Kafka instance, create a ClusterLogForwarder custom resource (CR) with an output to that instance and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.
Procedure
Create a ClusterLogForwarder CR YAML file similar to the following:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: app-logs (3)
type: kafka (4)
url: tls://kafka.example.devlab.com:9093/app-topic (5)
secret:
name: kafka-secret (6)
- name: infra-logs
type: kafka
url: tcp://kafka.devlab2.example.com:9093/infra-topic (7)
- name: audit-logs
type: kafka
url: tls://kafka.qelab.example.com:9093/audit-topic
secret:
name: kafka-secret-qe
pipelines:
- name: app-topic (8)
inputRefs: (9)
- application
outputRefs: (10)
- app-logs
parse: json (11)
labels:
logType: "application" (12)
- name: infra-topic (13)
inputRefs:
- infrastructure
outputRefs:
- infra-logs
labels:
logType: "infra"
- name: audit-topic
inputRefs:
- audit
outputRefs:
- audit-logs
- default (14)
labels:
logType: "audit"
1 The name of the ClusterLogForwarder CR must be instance.
2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
3 Specify a name for the output.
4 Specify the kafka type.
5 Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent.
7 Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output.
8 Optional: Specify a name for the pipeline.
9 Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
10 Specify the output to use with that pipeline for forwarding the logs.
11 Optional: Forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
12 Optional: String. One or more labels to add to the logs.
13 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
- Optional: A name to describe the pipeline.
- The inputRefs is the log type to forward using that pipeline: application, infrastructure, or audit.
- The outputRefs is the name of the output to use.
- Optional: String. One or more labels to add to the logs.
14 Optional: Specify default to forward logs to the internal Elasticsearch instance.
Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in this example:
...
spec:
outputs:
- name: app-logs
type: kafka
secret:
name: kafka-secret-dev
kafka: (1)
brokers: (2)
- tls://kafka-broker1.example.com:9093/
- tls://kafka-broker2.example.com:9093/
topic: app-topic (3)
...
1 Specify a kafka key that has brokers and topic keys.
2 Use the brokers key to specify an array of one or more brokers.
3 Use the topic key to specify the target topic that will receive the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=fluentd
Forwarding application logs from specific projects
You can use the Cluster Log Forwarder to send a copy of the application logs from specific projects to an external log aggregator. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from OKD.
To configure forwarding application logs from a project, create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create a ClusterLogForwarder CR YAML file similar to the following:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: fluentd-server-secure (3)
type: fluentdForward (4)
url: 'tls://fluentdserver.security.example.com:24224' (5)
secret: (6)
name: fluentd-secret
- name: fluentd-server-insecure
type: fluentdForward
url: 'tcp://fluentdserver.home.example.com:24224'
inputs: (7)
- name: my-app-logs
application:
namespaces:
- my-project
pipelines:
- name: forward-to-fluentd-insecure (8)
inputRefs: (9)
- my-app-logs
outputRefs: (10)
- fluentd-server-insecure
parse: json (11)
labels:
project: "my-project" (12)
- name: forward-to-fluentd-secure (13)
inputRefs:
- application
- audit
- infrastructure
outputRefs:
- fluentd-server-secure
- default
labels:
clusterId: "C1234"
1 The name of the ClusterLogForwarder CR must be instance.
2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
3 Specify a name for the output.
4 Specify the output type: elasticsearch, fluentdForward, syslog, or kafka.
5 Specify the URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent.
7 Configuration for an input to filter application logs from the specified projects.
8 Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
9 The my-app-logs input.
10 The name of the output to use.
11 Optional: Forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
12 Optional: String. One or more labels to add to the logs.
13 Configuration for a pipeline to send logs to other log aggregators.
- Optional: Specify a name for the pipeline.
- Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.
- Specify the output to use with that pipeline for forwarding the logs.
- Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
- Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
Forwarding application logs from specific pods
As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.
Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.
To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.
Procedure
Create a ClusterLogForwarder custom resource (CR) YAML file.
In the YAML file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels, as shown in the following example.
Example ClusterLogForwarder CR YAML file
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
pipelines:
- inputRefs: [ myAppLogData ] (3)
outputRefs: [ default ] (4)
parse: json (5)
inputs: (6)
- name: myAppLogData
application:
selector:
matchLabels: (7)
environment: production
app: nginx
namespaces: (8)
- app1
- app2
outputs: (9)
- default
...
1 The name of the ClusterLogForwarder CR must be instance.
2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
3 Specify one or more comma-separated values from inputs[].name.
4 Specify one or more comma-separated values from outputs[].
5 Optional: Forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
6 Define a unique inputs[].name for each application that has a unique set of pod labels.
7 Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
8 Optional: Specify one or more namespaces.
9 Specify one or more outputs to forward your log data to. The optional default output shown here sends log data to the internal Elasticsearch instance.
Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces, as shown in the preceding example.
Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
- For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown.
- Update the selectors to match the pod labels of this application.
- Add the new inputs[].name value to inputRefs. For example:
  - inputRefs: [ myAppLogData, myOtherAppLogData ]
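Before creating the CR, you can optionally check which pods your label selector matches; for example, using the labels and one of the namespaces assumed in the sample above:
$ oc get pods -l app=nginx,environment=production -n app1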
Create the CR object:
$ oc create -f <file-name>.yaml
Additional resources
- For more information on matchLabels in Kubernetes, see Resources that support set-based requirements.
Forwarding logs using the legacy Fluentd method
You can use the Fluentd forward protocol to send logs to destinations outside of your OKD cluster by creating a configuration file and config map. You are responsible for configuring the external log aggregator to receive log data from OKD.
This method for forwarding logs is deprecated in OKD and will be removed in a future release.
The forward protocols are provided with the Fluentd image as of v1.4.0.
To send logs using the Fluentd forward protocol, create a configuration file called secure-forward.conf that points to an external log aggregator. Then, use that file to create a config map called secure-forward in the openshift-logging project, which OKD uses when forwarding the logs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Sample Fluentd configuration file
<store>
@type forward
<security>
self_hostname ${hostname}
shared_key "fluent-receiver"
</security>
transport tls
tls_verify_hostname false
tls_cert_path '/etc/ocp-forward/ca-bundle.crt'
<buffer>
@type file
path '/var/lib/fluentd/secureforwardlegacy'
queued_chunks_limit_size "1024"
chunk_limit_size "1m"
flush_interval "5s"
flush_at_shutdown "false"
flush_thread_count "2"
retry_max_interval "300"
retry_forever true
overflow_action "#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'throw_exception'}"
</buffer>
<server>
host fluent-receiver.example.com
port 24224
</server>
</store>
Procedure
To configure OKD to forward logs using the legacy Fluentd method:
Create a configuration file named secure-forward.conf and specify parameters similar to the following within the <store> stanza:
<store>
@type forward
<security>
self_hostname ${hostname}
shared_key <key> (1)
</security>
transport tls (2)
tls_verify_hostname <value> (3)
tls_cert_path <path_to_file> (4)
<buffer> (5)
@type file
path '/var/lib/fluentd/secureforwardlegacy'
queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }"
chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }"
flush_interval "#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}"
flush_at_shutdown "#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}"
flush_thread_count "#{ENV['FLUSH_THREAD_COUNT'] || 2}"
retry_max_interval "#{ENV['FORWARD_RETRY_WAIT'] || '300'}"
retry_forever true
</buffer>
<server>
name (6)
host (7)
hostlabel (8)
port (9)
</server>
<server> (10)
name
host
</server>
1 Enter the shared key between nodes.
2 Specify tls to enable TLS validation.
3 Set to true to verify the server cert hostname. Set to false to ignore the server cert hostname.
4 Specify the path to the private CA certificate file as /etc/ocp-forward/ca_cert.pem.
5 Specify the Fluentd buffer parameters as needed.
6 Optionally, enter a name for this server.
7 Specify the hostname or IP of the server.
8 Specify the host label of the server.
9 Specify the port of the server.
10 Optionally, add additional servers. If you specify two or more servers, forward uses these server nodes in a round-robin order.
To use Mutual TLS (mTLS) authentication, see the Fluentd documentation for information about client certificate, key parameters, and other settings.
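For reference, a hedged sketch of the client-certificate settings for mTLS; the tls_client_cert_path and tls_client_private_key_path parameter names come from the Fluentd forward-output documentation rather than this guide, and the paths are placeholders:
  transport tls
  tls_cert_path '/etc/ocp-forward/ca-bundle.crt'
  tls_client_cert_path '<path_to_client_certificate>'
  tls_client_private_key_path '<path_to_client_private_key>'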
Create a config map named secure-forward in the openshift-logging project from the configuration file:
$ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging
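Optionally, verify that the config map exists; a hedged check:
$ oc get configmap secure-forward -n openshift-logging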
The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=fluentd
Forwarding logs using the legacy syslog method
You can use the syslog RFC3164 protocol to send logs to destinations outside of your OKD cluster by creating a configuration file and config map. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OKD.
This method for forwarding logs is deprecated in OKD and will be removed in a future release.
There are two versions of the syslog protocol:
out_syslog: The non-buffered implementation, which communicates through UDP, does not buffer data and writes out results immediately.
out_syslog_buffered: The buffered implementation, which communicates through TCP and buffers data into chunks.
To send logs using the syslog protocol, create a configuration file called syslog.conf with the information needed to forward the logs. Then, use that file to create a config map called syslog in the openshift-logging project, which OKD uses when forwarding the logs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Sample syslog configuration file
<store>
@type syslog_buffered
remote_syslog rsyslogserver.example.com
port 514
hostname ${hostname}
remove_tag_prefix tag
facility local0
severity info
use_record true
payload_key message
rfc 3164
</store>
You can configure the following syslog parameters. For more information, see the syslog RFC3164.
facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:
- 0 or kern for kernel messages
- 1 or user for user-level messages, the default.
- 2 or mail for the mail system
- 3 or daemon for the system daemons
- 4 or auth for the security/authentication messages
- 5 or syslog for messages generated internally by syslogd
- 6 or lpr for the line printer subsystem
- 7 or news for the network news subsystem
- 8 or uucp for the UUCP subsystem
- 9 or cron for the clock daemon
- 10 or authpriv for security authentication messages
- 11 or ftp for the FTP daemon
- 12 or ntp for the NTP subsystem
- 13 or security for the syslog audit logs
- 14 or console for the syslog alert logs
- 15 or solaris-cron for the scheduling daemon
- 16–23 or local0–local7 for locally used facilities
payloadKey: The record field to use as payload for the syslog message.
rfc: The RFC to be used for sending logs using syslog.
severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
- 0 or Emergency for messages indicating the system is unusable
- 1 or Alert for messages indicating action must be taken immediately
- 2 or Critical for messages indicating critical conditions
- 3 or Error for messages indicating error conditions
- 4 or Warning for messages indicating warning conditions
- 5 or Notice for messages indicating normal but significant conditions
- 6 or Informational for messages indicating informational messages
- 7 or Debug for messages indicating debug-level messages, the default
tag: The record field to use as a tag on the syslog message.
trimPrefix: The prefix to remove from the tag.
Procedure
To configure OKD to forward logs using the legacy configuration methods:
Create a configuration file named syslog.conf and specify parameters similar to the following within the <store> stanza:
<store>
@type <type> (1)
remote_syslog <syslog-server> (2)
port 514 (3)
hostname ${hostname}
remove_tag_prefix <prefix> (4)
facility <value>
severity <value>
use_record <value>
payload_key message
rfc 3164 (5)
</store>
1 Specify the protocol to use: either syslog or syslog_buffered.
2 Specify the FQDN or IP address of the syslog server.
3 Specify the port of the syslog server.
4 Optional: Specify the appropriate syslog parameters, for example:
- Parameter to remove the specified tag field from the syslog prefix.
- Parameter to set the specified field as the syslog key.
- Parameter to specify the syslog log facility or source.
- Parameter to specify the syslog log severity.
- Parameter to use the severity and facility from the record if available. If true, the container_name, namespace_name, and pod_name are included in the output content.
- Parameter to specify the key to set the payload of the syslog message. Defaults to message.
5 With the legacy syslog method, you must specify 3164 for the rfc value.
Create a config map named syslog in the openshift-logging project from the configuration file:
$ oc create configmap syslog --from-file=syslog.conf -n openshift-logging
The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=fluentd