Elastic Log Alerting

Learn how to create an async function that detects error logs.

Overview

This document uses an asynchronous function to analyze the log stream in Kafka and detect error logs. The async function then sends alerts to Slack. The following diagram illustrates the entire workflow.

(Figure 1: Elastic Log Alerting workflow)

Prerequisites

Create a Kafka Server and Topic

  1. Run the following commands to install strimzi-kafka-operator in the default namespace.

    helm repo add strimzi https://strimzi.io/charts/
    helm install kafka-operator -n default strimzi/strimzi-kafka-operator
  2. Use the following content to create a file kafka.yaml.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: kafka-logs-receiver
      namespace: default
    spec:
      kafka:
        version: 3.1.0
        replicas: 1
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
          - name: tls
            port: 9093
            type: internal
            tls: true
        config:
          offsets.topic.replication.factor: 1
          transaction.state.log.replication.factor: 1
          transaction.state.log.min.isr: 1
          default.replication.factor: 1
          min.insync.replicas: 1
          inter.broker.protocol.version: "3.1"
        storage:
          type: ephemeral
      zookeeper:
        replicas: 1
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}
    ---
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: logs
      namespace: default
      labels:
        strimzi.io/cluster: kafka-logs-receiver
    spec:
      partitions: 10
      replicas: 1
      config:
        retention.ms: 7200000
        segment.bytes: 1073741824
  3. Run the following command to deploy a 1-replica Kafka server named kafka-logs-receiver and a 1-replica Kafka topic named logs in the default namespace.

    kubectl apply -f kafka.yaml
  4. Run the following command to check pod status and wait for Kafka and Zookeeper to be up and running.

    $ kubectl get po
    NAME                                                   READY   STATUS    RESTARTS   AGE
    kafka-logs-receiver-entity-operator-57dc457ccc-tlqqs   3/3     Running   0          8m42s
    kafka-logs-receiver-kafka-0                            1/1     Running   0          9m13s
    kafka-logs-receiver-zookeeper-0                        1/1     Running   0          9m46s
    strimzi-cluster-operator-687fdd6f77-cwmgm              1/1     Running   0          11m
  5. Run the following commands to view the metadata of the Kafka cluster. To publish a test message to the logs topic, see the sketch after this list.

    # Starts a utility pod.
    $ kubectl run utils --image=arunvelsriram/utils -i --tty --rm
    # Checks metadata of the Kafka cluster.
    $ kafkacat -L -b kafka-logs-receiver-kafka-brokers:9092
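
Before creating the function, you can optionally verify that the logs topic accepts messages end to end. The following is a minimal Go sketch of a test producer, assuming the segmentio/kafka-go client; the address kafka-logs-receiver-kafka-brokers:9092 is only resolvable inside the cluster, so run it from an in-cluster pod or adapt the address to a port-forward. Neither this producer nor the sample log line is part of the original sample; both are illustrative.

    package main

    import (
        "context"
        "log"
        "time"

        "github.com/segmentio/kafka-go"
    )

    func main() {
        // Test producer for the logs topic; the broker address is the
        // bootstrap service that Strimzi creates for kafka-logs-receiver.
        w := &kafka.Writer{
            Addr:     kafka.TCP("kafka-logs-receiver-kafka-brokers:9092"),
            Topic:    "logs",
            Balancer: &kafka.LeastBytes{},
        }
        defer w.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Publish one fake error log line so the handler has something to match.
        err := w.WriteMessages(ctx, kafka.Message{
            Value: []byte(`{"level":"error","msg":"disk usage above 90% on node-1"}`),
        })
        if err != nil {
            log.Fatalf("failed to write message: %v", err)
        }
        log.Println("test log message published to topic logs")
    }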

Create a Logs Handler Function

  1. Use the following example to create a manifest file logs-handler-function.yaml, and modify the value of spec.image to point to your own image registry.

    apiVersion: core.openfunction.io/v1beta1
    kind: Function
    metadata:
      name: logs-async-handler
    spec:
      version: "v2.0.0"
      image: <your registry name>/logs-async-handler:latest
      imageCredentials:
        name: push-secret
      build:
        builder: openfunction/builder-go:latest
        env:
          FUNC_NAME: "LogsHandler"
          FUNC_CLEAR_SOURCE: "true"
          # Use FUNC_GOPROXY to set the goproxy
          # FUNC_GOPROXY: "https://goproxy.cn"
        srcRepo:
          url: "https://github.com/OpenFunction/samples.git"
          sourceSubPath: "functions/async/logs-handler-function/"
          revision: "main"
      serving:
        runtime: "async"
        scaleOptions:
          keda:
            scaledObject:
              pollingInterval: 15
              minReplicaCount: 0
              maxReplicaCount: 10
              cooldownPeriod: 60
              advanced:
                horizontalPodAutoscalerConfig:
                  behavior:
                    scaleDown:
                      stabilizationWindowSeconds: 45
                      policies:
                        - type: Percent
                          value: 50
                          periodSeconds: 15
                    scaleUp:
                      stabilizationWindowSeconds: 0
            triggers:
              - type: kafka
                metadata:
                  topic: logs
                  bootstrapServers: kafka-logs-receiver-kafka-brokers.default.svc.cluster.local:9092
                  consumerGroup: logs-handler
                  lagThreshold: "20"
        template:
          containers:
            - name: function
              imagePullPolicy: Always
        inputs:
          - name: kafka
            component: kafka-receiver
        outputs:
          - name: notify
            component: notification-manager
            operation: "post"
        bindings:
          kafka-receiver:
            type: bindings.kafka
            version: v1
            metadata:
              - name: brokers
                value: "kafka-logs-receiver-kafka-brokers:9092"
              - name: authRequired
                value: "false"
              - name: publishTopic
                value: "logs"
              - name: topics
                value: "logs"
              - name: consumerGroup
                value: "logs-handler"
          notification-manager:
            type: bindings.http
            version: v1
            metadata:
              - name: url
                value: http://notification-manager-svc.kubesphere-monitoring-system.svc.cluster.local:19093/api/v2/alerts
  2. Run the following command to create the function logs-async-handler.

    kubectl apply -f logs-handler-function.yaml
  3. The logs handler function will be triggered by messages from the logs topic in Kafka. Because minReplicaCount is 0, KEDA scales the function down to zero when there is no consumer lag and back up (to at most 10 replicas) as messages arrive. A sketch of the handler logic is shown below.
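
To make the data flow concrete, the following is a minimal Go sketch of what the handler in functions/async/logs-handler-function/ does: it receives each Kafka message through the kafka input, checks whether it looks like an error log, and posts an Alertmanager-compatible alert through the notify output (the notification-manager HTTP binding). The signature follows OpenFunction's async Go samples for this v1beta1 API, but the error-matching rule, alert labels, and payload shape here are illustrative assumptions; see the samples repository for the actual implementation.

    package main

    import (
        "encoding/json"
        "log"
        "strings"

        ofctx "github.com/OpenFunction/functions-framework-go/context"
    )

    // alert mirrors the Alertmanager-compatible payload that Notification
    // Manager accepts on /api/v2/alerts.
    type alert struct {
        Labels      map[string]string `json:"labels"`
        Annotations map[string]string `json:"annotations"`
    }

    // LogsHandler matches FUNC_NAME in the build env above. Each Kafka
    // message from the "kafka" input arrives as raw bytes.
    func LogsHandler(ctx ofctx.Context, in []byte) (ofctx.Out, error) {
        // Illustrative error detection: match any line containing "error".
        if !strings.Contains(strings.ToLower(string(in)), "error") {
            return ctx.ReturnOnSuccess(), nil
        }

        // Illustrative alert labels and annotations.
        payload, err := json.Marshal([]alert{{
            Labels:      map[string]string{"alertname": "ErrorLogDetected", "severity": "warning"},
            Annotations: map[string]string{"message": string(in)},
        }})
        if err != nil {
            return ctx.ReturnOnInternalError(), err
        }

        // Send posts the payload through the output binding named "notify".
        if _, err := ctx.Send("notify", payload); err != nil {
            return ctx.ReturnOnInternalError(), err
        }
        log.Printf("alert sent for log line: %s", in)
        return ctx.ReturnOnSuccess(), nil
    }

Notification Manager receives the POSTed alert list on /api/v2/alerts and forwards it to whatever receivers it has configured, such as a Slack channel.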