Deploy KSE and Install Logging Components

Extension components to be installed in KubeSphere Enterprise:

  • RadonDB DMP

  • OpenSearch Distributed Search and Analytics Engine

  • WhizardTelemetry Platform Service

  • WhizardTelemetry Data Pipeline

  • WhizardTelemetry Logging

  • WhizardTelemetry Auditing

  • WhizardTelemetry Notification

  • WhizardTelemetry Events

Disable the OpenSearch Sink

Before installing the WhizardTelemetry Logging, WhizardTelemetry Auditing, WhizardTelemetry Events, and WhizardTelemetry Notification extensions, disable the opensearch sink in the configuration of each of these extensions.

Taking the WhizardTelemetry Auditing extension as an example, set sinks.opensearch.enabled to false.

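As a sketch, the relevant fragment of the extension configuration would look like the following (the field path follows sinks.opensearch.enabled as described above; the surrounding structure may differ between components):

```yaml
sinks:
  opensearch:
    # Disable the OpenSearch sink before installing this extension
    enabled: false
```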

Configure Kafka

In KubeSphere Enterprise, after installing the RadonDB DMP extension, click the grid icon in the top navigation bar, then click RadonDB DMP to enter the database management platform and create a Kafka cluster for collecting logs.


Enable Automatic Topic Creation

Click the Kafka cluster name, go to the Parameter Management tab, and enable automatic topic creation.

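The naming conventions used later (the <cluster>-cluster-ca-cert secret, per-user user.p12 files) suggest a Strimzi-style Kafka operator underneath. Under that assumption, the UI toggle maps to the standard Kafka broker setting auto.create.topics.enable, sketched below; the field names here are assumptions, not taken from the DMP documentation:

```yaml
# Hypothetical fragment of a Strimzi-style Kafka custom resource
spec:
  kafka:
    config:
      auto.create.topics.enable: true
```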

Note

The read and write addresses of Kafka are available on the left side of the Kafka cluster details page.

Create a Kafka User

  1. On the Kafka cluster details page, go to the Kafka Users tab and click Create to create a Kafka user.


  2. Set the permissions for the Kafka user.


Obtain the Certificates

View Certificate Information

To communicate with Kafka, the relevant certificates and files must be configured: specifically, the <cluster>-cluster-ca-cert secret, together with the user.p12 field and password of the user created in the previous step. These details can be viewed in the KubeSphere Enterprise web console as follows.

  1. Click Workbench > Cluster Management at the top of the page and enter the host cluster.

  2. In the left navigation pane, select Configuration > Secrets.

  3. On the Secrets page, search for cluster-ca-cert, click the secret corresponding to the Kafka cluster to open its details page, and view the ca.crt field.


  4. On the Secrets page, search for the name of the Kafka user created earlier, click the corresponding secret to open its details page, and view the user.p12 and user.password fields.

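Secret data fields are stored base64-encoded; the console decodes them for display, but if you retrieve them with kubectl you can decode a field yourself. A minimal sketch using a dummy value (not a real certificate):

```shell
# Dummy base64 value standing in for a real ca.crt field
encoded="ZHVtbXktY2VydA=="
printf '%s' "$encoded" | base64 -d   # prints "dummy-cert"
```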

Generate the Certificates

  1. On a node of the cluster where Kafka is running, run the following commands.

    Note

    kafka cluster is the name of the Kafka cluster, kafka namespace is the namespace where Kafka is deployed, and kafka user is the Kafka user created earlier.

    export kafka_cluster=< kafka cluster >
    export kafka_namespace=< kafka namespace >
    export kafka_user=< kafka user >
    echo -e "apiVersion: v1\ndata:" > kafka-ca.yaml
    echo "  ca.crt: $(kubectl get secret -n $kafka_namespace ${kafka_cluster}-cluster-ca-cert \
      -o jsonpath='{.data.ca\.crt}')" >> kafka-ca.yaml
    echo -e "kind: Secret\nmetadata:\n  name: kafka-agent-cluster-ca\n  labels:\n    logging.whizard.io/certification: 'true'\n    logging.whizard.io/vector-role: Agent\n  namespace: kubesphere-logging-system\ntype: Opaque" >> kafka-ca.yaml
    echo "---" >> kafka-ca.yaml
    echo -e "apiVersion: v1\ndata:" >> kafka-ca.yaml
    echo "  user.p12: $(kubectl get secret -n $kafka_namespace ${kafka_user} \
      -o jsonpath='{.data.user\.p12}')" >> kafka-ca.yaml
    echo -e "kind: Secret\nmetadata:\n  name: kafka-agent-user-ca\n  labels:\n    logging.whizard.io/certification: 'true'\n    logging.whizard.io/vector-role: Agent\n  namespace: kubesphere-logging-system\ntype: Opaque" >> kafka-ca.yaml

    These commands generate a kafka-ca.yaml file containing two secrets, kafka-agent-cluster-ca and kafka-agent-user-ca, which hold the ca.crt and user.p12 data from the previous step, respectively. Example:

    apiVersion: v1
    data:
      ca.crt: xxx
    kind: Secret
    metadata:
      name: kafka-agent-cluster-ca
      labels:
        logging.whizard.io/certification: 'true'
        logging.whizard.io/vector-role: Agent
      namespace: kubesphere-logging-system
    type: Opaque
    ---
    apiVersion: v1
    data:
      user.p12: xxxx
    kind: Secret
    metadata:
      name: kafka-agent-user-ca
      labels:
        logging.whizard.io/certification: 'true'
        logging.whizard.io/vector-role: Agent
      namespace: kubesphere-logging-system
    type: Opaque
  2. Copy the kafka-ca.yaml file to a node of each cluster from which log data is to be collected, and run the following command.

    kubectl apply -f kafka-ca.yaml

    This command creates the kafka-agent-cluster-ca and kafka-agent-user-ca secrets in the kubesphere-logging-system project. vector-config automatically loads these two secrets and configures the corresponding certificates in vector.
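Before copying the file to other clusters, a quick local check (a sketch) can confirm that both secret definitions made it into kafka-ca.yaml:

```shell
# Count the generated secret names; a correct file contains both
grep -c 'name: kafka-agent' kafka-ca.yaml || echo "kafka-ca.yaml not found"
```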

Create Kafka Log Receivers

  cat <<EOF | kubectl apply -f -
  kind: Secret
  apiVersion: v1
  metadata:
    name: vector-agent-auditing-sink-kafka
    namespace: kubesphere-logging-system
    labels:
      logging.whizard.io/component: auditing
      logging.whizard.io/enable: 'true'
      logging.whizard.io/vector-role: Agent
    annotations:
      kubesphere.io/creator: admin
  stringData:
    sink.yaml: >-
      sinks:
        kafka_auditing:
          type: "kafka"
          topic: "vector-{{ .cluster }}-auditing"
          # Comma-separated Kafka bootstrap servers, e.g. "10.14.22.123:9092,10.14.23.332:9092"
          bootstrap_servers: "172.31.73.214:32239"
          librdkafka_options:
            security.protocol: "ssl"
            ssl.endpoint.identification.algorithm: "none"
            ssl.ca.location: "/etc/vector/custom/certification/ca.crt"
            ssl.keystore.location: "/etc/vector/custom/certification/user.p12"
            # Replace with the user.password value obtained earlier
            ssl.keystore.password: "yj5nwJLVqyII1ZHZCW2RQwJcyjKo3B9o"
          encoding:
            codec: "json"
          inputs:
            - auditing_remapped
          batch:
            max_events: 100
            timeout_secs: 10
  type: Opaque
  ---
  kind: Secret
  apiVersion: v1
  metadata:
    name: vector-agent-events-sink-kafka
    namespace: kubesphere-logging-system
    labels:
      logging.whizard.io/component: events
      logging.whizard.io/enable: 'true'
      logging.whizard.io/vector-role: Agent
    annotations:
      kubesphere.io/creator: admin
  stringData:
    sink.yaml: >-
      sinks:
        kafka_events:
          type: "kafka"
          topic: "vector-{{ .cluster }}-events"
          bootstrap_servers: "172.31.73.214:32239"
          librdkafka_options:
            security.protocol: "ssl"
            ssl.endpoint.identification.algorithm: "none"
            ssl.ca.location: "/etc/vector/custom/certification/ca.crt"
            ssl.keystore.location: "/etc/vector/custom/certification/user.p12"
            ssl.keystore.password: "yj5nwJLVqyII1ZHZCW2RQwJcyjKo3B9o"
          encoding:
            codec: "json"
          inputs:
            - kube_events_remapped
          batch:
            max_events: 100
            timeout_secs: 10
  type: Opaque
  ---
  kind: Secret
  apiVersion: v1
  metadata:
    name: vector-agent-logs-sink-kafka
    namespace: kubesphere-logging-system
    labels:
      logging.whizard.io/component: logs
      logging.whizard.io/enable: 'true'
      logging.whizard.io/vector-role: Agent
    annotations:
      kubesphere.io/creator: admin
  stringData:
    sink.yaml: >-
      sinks:
        kafka_logs:
          type: "kafka"
          topic: "vector-{{ .cluster }}-logs"
          bootstrap_servers: "172.31.73.214:32239"
          librdkafka_options:
            security.protocol: "ssl"
            ssl.endpoint.identification.algorithm: "none"
            ssl.ca.location: "/etc/vector/custom/certification/ca.crt"
            ssl.keystore.location: "/etc/vector/custom/certification/user.p12"
            ssl.keystore.password: "yj5nwJLVqyII1ZHZCW2RQwJcyjKo3B9o"
          encoding:
            codec: "json"
          inputs:
            - kube_logs_remapped
            - systemd_logs_remapped
          batch:
            max_events: 100
            timeout_secs: 10
  type: Opaque
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: vector-aggregator-notification-history-sink-kafka
    namespace: kubesphere-logging-system
    labels:
      logging.whizard.io/component: "notification-history"
      logging.whizard.io/vector-role: Aggregator
      logging.whizard.io/enable: "true"
  stringData:
    sink.yaml: >-
      sinks:
        kafka_notification_history:
          type: "kafka"
          topic: "vector-{{ .cluster }}-notification-history"
          bootstrap_servers: "172.31.73.214:32239"
          librdkafka_options:
            security.protocol: "ssl"
            ssl.endpoint.identification.algorithm: "none"
            ssl.ca.location: "/etc/vector/custom/certification/ca.crt"
            ssl.keystore.location: "/etc/vector/custom/certification/user.p12"
            ssl.keystore.password: "yj5nwJLVqyII1ZHZCW2RQwJcyjKo3B9o"
          encoding:
            codec: "json"
          inputs:
            - notification_history_remapped
          batch:
            max_events: 100
            timeout_secs: 10
  type: Opaque
  EOF