Outputs and ClusterOutputs

See the Logging operator documentation for the full details on how to configure Flows and ClusterFlows.

See Rancher Integration with Logging Services: Troubleshooting for how to resolve memory problems with the logging buffer.

Outputs

The Output resource defines where your Flows can send the log messages. Outputs are the final stage for a logging Flow.

The Output is a namespaced resource, which means only a Flow within the same namespace can access it.

You can use Secrets in these definitions, but the Secrets must also be in the same namespace as the Output.
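For example, the logging operator can read credentials from a Kubernetes Secret through a `valueFrom.secretKeyRef` field. A minimal sketch, assuming a Secret named `es-credentials` with a `password` key (both names are illustrative):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-with-auth
  namespace: devteam
spec:
  elasticsearch:
    host: elasticsearch.example.com
    port: 9200
    scheme: https
    user: elastic
    password:
      valueFrom:
        secretKeyRef:
          name: es-credentials   # Secret must live in the same namespace as the Output
          key: password
```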

Outputs can be configured by filling out forms in the Rancher UI.

For the details of the Output custom resource, see OutputSpec.

The Rancher UI provides forms for configuring the following Output types:

  • Amazon ElasticSearch
  • Azure Storage
  • Cloudwatch
  • Datadog
  • Elasticsearch
  • File
  • Fluentd
  • GCS
  • Kafka
  • Kinesis Stream
  • LogDNA
  • LogZ
  • Loki
  • New Relic
  • Splunk
  • SumoLogic
  • Syslog

The Rancher UI provides forms for configuring the Output type, target, and access credentials, if applicable.

For example configuration for each logging plugin supported by the logging operator, see the Logging operator documentation.

ClusterOutputs

ClusterOutput defines an Output without namespace restrictions. It is only effective when deployed in the same namespace as the logging operator.

ClusterOutputs can be configured by filling out forms in the Rancher UI.

For the details of the ClusterOutput custom resource, see ClusterOutput.

YAML Examples

Once logging is installed, you can use these examples to help craft your own logging pipeline.

Cluster Output to ElasticSearch

Let’s say you wanted to send all logs in your cluster to an Elasticsearch cluster. First, we create a ClusterOutput.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: "example-es"
  namespace: "cattle-logging-system"
spec:
  elasticsearch:
    host: elasticsearch.example.com
    port: 9200
    scheme: http
```

We have created this ClusterOutput, with a minimal Elasticsearch configuration, in the same namespace as our operator: cattle-logging-system. Any time we create a ClusterFlow or ClusterOutput, we have to put it in the cattle-logging-system namespace.

Now that we have configured where we want the logs to go, let’s configure all logs to go to that ClusterOutput.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: "all-logs"
  namespace: "cattle-logging-system"
spec:
  globalOutputRefs:
    - "example-es"
```

We should now see our configured index with logs in it.
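If you would rather not ship the logging infrastructure's own logs to the index, a ClusterFlow can filter what it collects with `match` rules. A sketch under that assumption (the exclude list is illustrative):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: "all-logs"
  namespace: "cattle-logging-system"
spec:
  match:
    # Skip the logging operator's own namespace, then select everything else.
    - exclude:
        namespaces:
          - cattle-logging-system
    - select: {}
  globalOutputRefs:
    - "example-es"
```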

Output to Splunk

What if we have an application team who only wants logs from a specific namespace sent to a Splunk server? For this case, we can use namespaced Outputs and Flows.

Before we start, let’s set up that team’s application: coolapp.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: devteam
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coolapp
  namespace: devteam
  labels:
    app: coolapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coolapp
  template:
    metadata:
      labels:
        app: coolapp
    spec:
      containers:
        - name: generator
          image: paynejacob/loggenerator:latest
```

With coolapp running, we will follow a similar path as when we created a ClusterOutput. However, unlike ClusterOutputs, we create our Output in our application’s namespace.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: "devteam-splunk"
  namespace: "devteam"
spec:
  splunkHec:
    hec_host: splunk.example.com
    hec_port: 8088
    protocol: http
```

Once again, let’s feed our Output some logs:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: "devteam-logs"
  namespace: "devteam"
spec:
  localOutputRefs:
    - "devteam-splunk"
```

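A Flow collects all logs in its namespace by default. If the team only wants logs from the application itself, a `match` rule can select on Pod labels. A minimal sketch reusing the `app: coolapp` label from the Deployment above:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: "devteam-coolapp-logs"
  namespace: "devteam"
spec:
  match:
    # Only collect logs from Pods labeled app=coolapp.
    - select:
        labels:
          app: coolapp
  localOutputRefs:
    - "devteam-splunk"
```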
Output to Syslog

Let’s say you wanted to send all logs in your cluster to a syslog server. First, we create a ClusterOutput:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: "example-syslog"
  namespace: "cattle-logging-system"
spec:
  syslog:
    buffer:
      timekey: 30s
      timekey_use_utc: true
      timekey_wait: 10s
      flush_interval: 5s
    format:
      type: json
      app_name_field: test
    host: syslog.example.com
    insecure: true
    port: 514
    transport: tcp
```

Now that we have configured where we want the logs to go, let’s configure all logs to go to that Output.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: "all-logs"
  namespace: "cattle-logging-system"
spec:
  globalOutputRefs:
    - "example-syslog"
```

Unsupported Outputs

For the final example, we create an Output to write logs to a destination that is not supported out of the box:

Note on syslog: syslog is a supported Output. However, this example still provides an overview of using unsupported plugins.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: syslog-config
  namespace: cattle-logging-system
type: Opaque
stringData:
  fluent-bit.conf: |
    [INPUT]
        Name forward
        Port 24224
    [OUTPUT]
        Name syslog
        InstanceName syslog-output
        Match *
        Addr syslog.example.com
        Port 514
        Cluster ranchers
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fluentbit-syslog-forwarder
  namespace: cattle-logging-system
  labels:
    output: syslog
spec:
  selector:
    matchLabels:
      output: syslog
  template:
    metadata:
      labels:
        output: syslog
    spec:
      containers:
        - name: fluentbit
          image: paynejacob/fluent-bit-out-syslog:latest
          ports:
            - containerPort: 24224
          volumeMounts:
            - mountPath: "/fluent-bit/etc/"
              name: configuration
      volumes:
        - name: configuration
          secret:
            secretName: syslog-config
---
apiVersion: v1
kind: Service
metadata:
  name: syslog-forwarder
  namespace: cattle-logging-system
spec:
  selector:
    output: syslog
  ports:
    - protocol: TCP
      port: 24224
      targetPort: 24224
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: all-logs
  namespace: cattle-logging-system
spec:
  globalOutputRefs:
    - syslog
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: syslog
  namespace: cattle-logging-system
spec:
  forward:
    servers:
      - host: "syslog-forwarder.cattle-logging-system"
    require_ack_response: false
    ignore_network_errors_at_startup: false
```

Let’s break down what is happening here. First, we create a Deployment of a container that has the additional syslog plugin and accepts logs forwarded from another fluentd. Next, we create a ClusterOutput configured as a forwarder to that Deployment. The operator’s fluentd will then forward all logs to the Deployment, which in turn sends them on to the configured syslog destination.