Purpose

This guide implements log collection with EFK (Elasticsearch, Fluentd, Kibana). Here we deploy Elasticsearch and Kibana as Deployments. If you want to use an Elasticsearch and Kibana outside the k8s cluster, you can skip the Elasticsearch and Kibana stages and only deploy fluentd as a DaemonSet.

Change the Docker log driver

First, switch Docker's log driver to json-file. For the docker-1.12 package installed with yum on CentOS, apply the following configuration.

Shell># vi /etc/sysconfig/docker

Original: OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
Modified: OPTIONS='--selinux-enabled --signature-verification=false'

Edit the log configuration file (create it if it does not exist): Shell># vi /etc/docker/daemon.json

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Restart Docker: Shell># systemctl restart docker
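After the restart you can confirm the switch took effect. A quick check, assuming the Docker CLI is available on the host:

```shell
# Print the logging driver the daemon now uses for new containers
docker info --format '{{.LoggingDriver}}'
# A correctly configured daemon reports: json-file
```

Note that containers created before the change keep their old log driver until they are recreated.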

Deployment

Create the Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: kube-logging
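The manifest can be applied and checked as follows (the file name is illustrative):

```shell
# Create the namespace and confirm it is Active
kubectl apply -f kube-logging-ns.yaml
kubectl get namespace kube-logging
```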

Deploy Elasticsearch

Here we deploy a single-node Elasticsearch with a Deployment and expose it as a service inside the k8s cluster on ports 9200 and 9300.

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-elasticsearch
  name: kubernetes-elasticsearch
  namespace: kube-logging
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-elasticsearch
  template:
    metadata:
      labels:
        k8s-app: kubernetes-elasticsearch
    spec:
      containers:
      - name: kubernetes-elasticsearch
        image: hub.k8s.com/apps/elasticsearch:6.2.3
        ports:
        - name: es-data
          containerPort: 9200
          protocol: TCP
        - name: es-cluster
          containerPort: 9300
          protocol: TCP
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: tmp-volume
      volumes:
      - name: tmp-volume
        emptyDir: {}
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-elasticsearch
  name: elasticsearch
  namespace: kube-logging
spec:
  type: ClusterIP
  clusterIP: 10.254.0.202
  ports:
  - name: es-data
    port: 9200
    targetPort: 9200
  - name: es-cluster
    port: 9300
    targetPort: 9300
  selector:
    k8s-app: kubernetes-elasticsearch
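Once applied, a quick sanity check, run from a node that can reach the ClusterIP:

```shell
# Pod and service should both be up
kubectl -n kube-logging get pods,svc -l k8s-app=kubernetes-elasticsearch
# Elasticsearch answers on 9200 with its cluster banner JSON
curl http://10.254.0.202:9200/
```

Because the data directory is an emptyDir, indexed logs are lost whenever the pod is rescheduled; this single-node setup is only suitable for testing.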

Deploy Kibana

Create the ConfigMap used by Kibana

This creates a ConfigMap holding the configuration file Kibana needs at startup; when deploying Kibana we mount it into the pod. The file defines Kibana's server name and host address and, most importantly, the Elasticsearch service URL and port. The commented lines are the security settings used when Elasticsearch runs with X-Pack enabled, which is not the case here.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: kube-logging
data:
  kibana.yml: |
    server.name: kibana
    server.host: "0"
    elasticsearch.url: http://elasticsearch.kube-logging.svc.cluster.local:9200
    #elasticsearch.username: elastic
    #elasticsearch.password: changeme
    #xpack.monitoring.ui.container.elasticsearch.enabled: true

Create Kibana's Deployment and Service

Create a Deployment named kubernetes-kibana that runs one pod from the kibana 6.2.3 image, exposes port 5601, and mounts the ConfigMap named "kibana" created above as the kibana.yml configuration file.
Then publish a cluster service named "kibana".

---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-kibana
  name: kubernetes-kibana
  namespace: kube-logging
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-kibana
  template:
    metadata:
      labels:
        k8s-app: kubernetes-kibana
    spec:
      containers:
      - name: kubernetes-kibana
        image: hub.k8s.com/apps/kibana:6.2.3
        ports:
        - name: kibana-web
          containerPort: 5601
          protocol: TCP
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-kibana
  name: kibana
  namespace: kube-logging
spec:
  type: ClusterIP
  clusterIP: 10.254.0.203
  ports:
  - name: kibana-web
    port: 5601
    targetPort: 5601
  selector:
    k8s-app: kubernetes-kibana
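Before adding the ingress, it is worth confirming that Kibana itself is serving inside the cluster:

```shell
# Pod should be Running, service should carry the fixed ClusterIP
kubectl -n kube-logging get pods,svc -l k8s-app=kubernetes-kibana
# From a cluster node, Kibana's web port should respond
curl -I http://10.254.0.203:5601/
```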

Create Kibana's Ingress configuration

Note: before applying the manifest below, the cluster's nginx ingress controller must already be set up; see 《12-A-接入点-nginx ingress》. We publish a virtual host with the domain kibana.k8s.com for Kibana, backed by the internal kibana service on port 5601.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: kube-logging
spec:
  rules:
  - host: kibana.k8s.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601
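Routing can be checked without DNS by passing the Host header directly to the ingress controller; the controller address below is a placeholder for your environment:

```shell
# Ingress object should list kibana.k8s.com as its host
kubectl -n kube-logging get ingress kibana-ingress
# Substitute the nginx ingress controller's address for <ingress-controller-ip>
curl -I -H 'Host: kibana.k8s.com' http://<ingress-controller-ip>/
```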

Deploy fluentd

Create RBAC

Create a ServiceAccount named fluentd, along with a ClusterRole that grants get, list, and watch on the pods resource in the core ("") API group, and bind the role to the account with a ClusterRoleBinding.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: kube-logging
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
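The binding can be verified by impersonating the service account:

```shell
# Ask the API server whether the fluentd ServiceAccount may list pods cluster-wide
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-logging:fluentd
# A correct binding prints: yes
```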

Create the ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd
  namespace: kube-logging
data:
  fluent.conf: |
    @include kubernetes.conf
    <match **>
      type elasticsearch
      log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
      logstash_format true
      buffer_chunk_limit 2M
      buffer_queue_limit 32
      flush_interval 5s
      max_retry_wait 30
      disable_retry_limit
      num_threads 8
    </match>

Create the DaemonSet

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        image: hub.k8s.com/google-containers/fluentd-kubernetes-daemonset:elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.kube-logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        # X-Pack Authentication
        # =====================
        #- name: FLUENT_ELASTICSEARCH_USER
        #  value: "elastic"
        #- name: FLUENT_ELASTICSEARCH_PASSWORD
        #  value: "changeme"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
        - name: config
          mountPath: /fluentd/etc/fluent.conf
          subPath: fluent.conf
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config
        configMap:
          name: fluentd
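With everything deployed, an end-to-end check, run from a node that can reach the Elasticsearch ClusterIP:

```shell
# The DaemonSet should report one fluentd pod per node
kubectl -n kube-logging get daemonset fluentd
kubectl -n kube-logging get pods -l k8s-app=fluentd-logging -o wide
# With logstash_format enabled, fluentd writes daily logstash-YYYY.MM.DD indices
curl 'http://10.254.0.202:9200/_cat/indices/logstash-*?v'
```

Once the logstash-* indices appear, create an index pattern of logstash-* in Kibana (via kibana.k8s.com) to browse the collected container logs.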