06-2. Deploying a Highly Available kube-apiserver Cluster

This document describes how to deploy a three-instance kube-apiserver cluster. The instances are accessed through the kube-nginx proxy, which keeps the API service available even if an instance fails.

Note: unless stated otherwise, all operations in this document are performed on the zhangjun-k8s01 node, which then distributes files to and runs commands on the other nodes remotely.

Preparation

For downloading the latest binaries and for installing and configuring flanneld, see 06-1.部署master节点.md.

Create the kubernetes certificate and private key

Create the certificate signing request:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat > kubernetes-csr.json <<EOF
    {
      "CN": "kubernetes",
      "hosts": [
        "127.0.0.1",
        "172.27.137.240",
        "172.27.137.239",
        "172.27.137.238",
        "${CLUSTER_KUBERNETES_SVC_IP}",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local."
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "4Paradigm"
        }
      ]
    }
    EOF
  • the hosts field lists the IPs and domain names authorized to use this certificate; here it contains the master node IPs plus the IP and domain names of the kubernetes service;
  • the kubernetes service IP is created automatically by the apiserver; it is normally the first IP of the network segment specified by --service-cluster-ip-range (see the sketch after this block for one way to compute it) and can later be retrieved with:

      $ kubectl get svc kubernetes
      NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
      kubernetes   10.254.0.1   <none>        443/TCP   1d

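Before the cluster is up, the same value has to come from CLUSTER_KUBERNETES_SVC_IP in environment.sh. As a rough illustration only (not part of the original scripts), the first host IP of SERVICE_CIDR could be derived like this, assuming a dotted-quad CIDR such as 10.254.0.0/16:

    # A sketch: compute the first host IP of SERVICE_CIDR (e.g. 10.254.0.0/16 -> 10.254.0.1).
    source /opt/k8s/bin/environment.sh
    CLUSTER_KUBERNETES_SVC_IP=$(echo "${SERVICE_CIDR}" | awk -F'[./]' '{printf "%d.%d.%d.%d", $1, $2, $3, $4 + 1}')
    echo "${CLUSTER_KUBERNETES_SVC_IP}"
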
Generate the certificate and private key:

    cfssl gencert -ca=/opt/k8s/work/ca.pem \
      -ca-key=/opt/k8s/work/ca-key.pem \
      -config=/opt/k8s/work/ca-config.json \
      -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
    ls kubernetes*pem

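Optionally (not a step from the original guide), the SANs embedded in the new certificate can be verified before distributing it:

    # Optional sanity check: list the IPs and DNS names the certificate is valid for.
    openssl x509 -in kubernetes.pem -noout -text | grep -A 1 'Subject Alternative Name'
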
Copy the generated certificate and private key to all master nodes:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
        scp kubernetes*.pem root@${node_ip}:/etc/kubernetes/cert/
      done

Create the encryption configuration file

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat > encryption-config.yaml <<EOF
    kind: EncryptionConfig
    apiVersion: v1
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: ${ENCRYPTION_KEY}
          - identity: {}
    EOF

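ENCRYPTION_KEY is defined in environment.sh; for the aescbc provider it should be a base64-encoded 32-byte random value. A minimal sketch of how such a key is typically generated (assuming that is how environment.sh populates it):

    # A sketch: generate a base64-encoded 32-byte key suitable for the aescbc provider.
    head -c 32 /dev/urandom | base64
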
Copy the encryption configuration file to the /etc/kubernetes directory on each master node:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
      done

Create the audit policy file

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat > audit-policy.yaml <<EOF
    apiVersion: audit.k8s.io/v1beta1
    kind: Policy
    rules:
      # The following requests were manually identified as high-volume and low-risk, so drop them.
      - level: None
        resources:
          - group: ""
            resources:
              - endpoints
              - services
              - services/status
        users:
          - 'system:kube-proxy'
        verbs:
          - watch
      - level: None
        resources:
          - group: ""
            resources:
              - nodes
              - nodes/status
        userGroups:
          - 'system:nodes'
        verbs:
          - get
      - level: None
        namespaces:
          - kube-system
        resources:
          - group: ""
            resources:
              - endpoints
        users:
          - 'system:kube-controller-manager'
          - 'system:kube-scheduler'
          - 'system:serviceaccount:kube-system:endpoint-controller'
        verbs:
          - get
          - update
      - level: None
        resources:
          - group: ""
            resources:
              - namespaces
              - namespaces/status
              - namespaces/finalize
        users:
          - 'system:apiserver'
        verbs:
          - get
      # Don't log HPA fetching metrics.
      - level: None
        resources:
          - group: metrics.k8s.io
        users:
          - 'system:kube-controller-manager'
        verbs:
          - get
          - list
      # Don't log these read-only URLs.
      - level: None
        nonResourceURLs:
          - '/healthz*'
          - /version
          - '/swagger*'
      # Don't log events requests.
      - level: None
        resources:
          - group: ""
            resources:
              - events
      # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
      - level: Request
        omitStages:
          - RequestReceived
        resources:
          - group: ""
            resources:
              - nodes/status
              - pods/status
        users:
          - kubelet
          - 'system:node-problem-detector'
          - 'system:serviceaccount:kube-system:node-problem-detector'
        verbs:
          - update
          - patch
      - level: Request
        omitStages:
          - RequestReceived
        resources:
          - group: ""
            resources:
              - nodes/status
              - pods/status
        userGroups:
          - 'system:nodes'
        verbs:
          - update
          - patch
      # deletecollection calls can be large, don't log responses for expected namespace deletions
      - level: Request
        omitStages:
          - RequestReceived
        users:
          - 'system:serviceaccount:kube-system:namespace-controller'
        verbs:
          - deletecollection
      # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
      # so only log at the Metadata level.
      - level: Metadata
        omitStages:
          - RequestReceived
        resources:
          - group: ""
            resources:
              - secrets
              - configmaps
          - group: authentication.k8s.io
            resources:
              - tokenreviews
      # Get responses can be large; skip them.
      - level: Request
        omitStages:
          - RequestReceived
        resources:
          - group: ""
          - group: admissionregistration.k8s.io
          - group: apiextensions.k8s.io
          - group: apiregistration.k8s.io
          - group: apps
          - group: authentication.k8s.io
          - group: authorization.k8s.io
          - group: autoscaling
          - group: batch
          - group: certificates.k8s.io
          - group: extensions
          - group: metrics.k8s.io
          - group: networking.k8s.io
          - group: policy
          - group: rbac.authorization.k8s.io
          - group: scheduling.k8s.io
          - group: settings.k8s.io
          - group: storage.k8s.io
        verbs:
          - get
          - list
          - watch
      # Default level for known APIs
      - level: RequestResponse
        omitStages:
          - RequestReceived
        resources:
          - group: ""
          - group: admissionregistration.k8s.io
          - group: apiextensions.k8s.io
          - group: apiregistration.k8s.io
          - group: apps
          - group: authentication.k8s.io
          - group: authorization.k8s.io
          - group: autoscaling
          - group: batch
          - group: certificates.k8s.io
          - group: extensions
          - group: metrics.k8s.io
          - group: networking.k8s.io
          - group: policy
          - group: rbac.authorization.k8s.io
          - group: scheduling.k8s.io
          - group: settings.k8s.io
          - group: storage.k8s.io
      # Default level for all other requests.
      - level: Metadata
        omitStages:
          - RequestReceived
    EOF

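Once kube-apiserver has been started with this policy (the systemd unit later in this document sets --audit-policy-file and --audit-log-path), individual audit events can be spot-checked. A sketch, assuming python is available on the master node:

    # A sketch: pretty-print the most recent audit event on a master node.
    source /opt/k8s/bin/environment.sh
    tail -n 1 ${K8S_DIR}/kube-apiserver/audit.log | python -m json.tool
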
Distribute the audit policy file:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp audit-policy.yaml root@${node_ip}:/etc/kubernetes/audit-policy.yaml
      done

Create the certificate used later for accessing metrics-server

Create the certificate signing request:

    cat > proxy-client-csr.json <<EOF
    {
      "CN": "aggregator",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "4Paradigm"
        }
      ]
    }
    EOF
  • the CN must be listed in kube-apiserver's --requestheader-allowed-names parameter, otherwise later requests for metrics will be rejected as forbidden.

Generate the certificate and private key:

    cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
      -ca-key=/etc/kubernetes/cert/ca-key.pem \
      -config=/etc/kubernetes/cert/ca-config.json \
      -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
    ls proxy-client*.pem

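As an optional check (not part of the original steps), confirm that the certificate's CN matches the value that will be passed to --requestheader-allowed-names:

    # Optional check: the subject CN should be "aggregator".
    openssl x509 -in proxy-client.pem -noout -subject
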
Copy the generated certificate and private key to all master nodes:

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp proxy-client*.pem root@${node_ip}:/etc/kubernetes/cert/
      done

Create the kube-apiserver systemd unit template file

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat > kube-apiserver.service.template <<EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target

    [Service]
    WorkingDirectory=${K8S_DIR}/kube-apiserver
    ExecStart=/opt/k8s/bin/kube-apiserver \\
      --advertise-address=##NODE_IP## \\
      --default-not-ready-toleration-seconds=360 \\
      --default-unreachable-toleration-seconds=360 \\
      --feature-gates=DynamicAuditing=true \\
      --max-mutating-requests-inflight=2000 \\
      --max-requests-inflight=4000 \\
      --default-watch-cache-size=200 \\
      --delete-collection-workers=2 \\
      --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
      --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
      --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
      --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
      --etcd-servers=${ETCD_ENDPOINTS} \\
      --bind-address=##NODE_IP## \\
      --secure-port=6443 \\
      --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
      --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
      --insecure-port=0 \\
      --audit-dynamic-configuration \\
      --audit-log-maxage=15 \\
      --audit-log-maxbackup=3 \\
      --audit-log-maxsize=100 \\
      --audit-log-truncate-enabled \\
      --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
      --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
      --profiling \\
      --anonymous-auth=false \\
      --client-ca-file=/etc/kubernetes/cert/ca.pem \\
      --enable-bootstrap-token-auth \\
      --requestheader-allowed-names="aggregator" \\
      --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
      --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
      --requestheader-group-headers=X-Remote-Group \\
      --requestheader-username-headers=X-Remote-User \\
      --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
      --authorization-mode=Node,RBAC \\
      --runtime-config=api/all=true \\
      --enable-admission-plugins=NodeRestriction \\
      --allow-privileged=true \\
      --apiserver-count=3 \\
      --event-ttl=168h \\
      --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
      --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
      --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
      --kubelet-https=true \\
      --kubelet-timeout=10s \\
      --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
      --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
      --service-cluster-ip-range=${SERVICE_CIDR} \\
      --service-node-port-range=${NODE_PORT_RANGE} \\
      --logtostderr=true \\
      --v=2
    Restart=on-failure
    RestartSec=10
    Type=notify
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF
  • --advertise-address: the IP address the apiserver advertises to the cluster (used as a backend of the kubernetes service);
  • --default-*-toleration-seconds: thresholds for tolerating not-ready/unreachable nodes;
  • --max-*-requests-inflight: upper limits on in-flight requests;
  • --etcd-*: certificates for accessing etcd and the etcd server addresses;
  • --encryption-provider-config: the configuration used to encrypt secrets before they are stored in etcd;
  • --bind-address: the IP the https endpoint listens on; it must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside the host;
  • --secure-port: the https listening port;
  • --insecure-port=0: disables the insecure http port (8080);
  • --tls-*-file: the certificate, private key, and CA file used by the apiserver;
  • --audit-*: parameters for the audit policy and the audit log file;
  • --client-ca-file: the CA used to verify certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
  • --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
  • --requestheader-*: parameters for kube-apiserver's aggregation layer, required by the proxy-client certificate and HPA;
  • --requestheader-client-ca-file: the CA that signed the certificate specified by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
  • --requestheader-allowed-names: must not be empty; a comma-separated list of the CN names of the --proxy-client-cert-file certificates, set here to "aggregator";
  • --service-account-key-file: the public key used to verify ServiceAccount tokens; kube-controller-manager's --service-account-private-key-file specifies the matching private key, and the two are used as a pair;
  • --runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
  • --authorization-mode=Node,RBAC and --anonymous-auth=false: enable Node and RBAC authorization modes and reject unauthorized requests;
  • --enable-admission-plugins: enables admission plugins that are disabled by default;
  • --allow-privileged: allows running privileged containers;
  • --apiserver-count=3: the number of apiserver instances;
  • --event-ttl: how long events are retained;
  • --kubelet-*: if set, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the user of that certificate (the kubernetes*.pem certificate above uses the user kubernetes), otherwise calls to the kubelet API are rejected as unauthorized;
  • --proxy-client-*: the certificate the apiserver uses to access metrics-server;
  • --service-cluster-ip-range: the Service cluster IP range;
  • --service-node-port-range: the NodePort port range;

If the machines running kube-apiserver do not also run kube-proxy, the --enable-aggregator-routing=true parameter must be added as well.

For more details on the --requestheader-* parameters, refer to the Kubernetes documentation on the API aggregation layer.

Note:

  1. the CA certificate specified by --requestheader-client-ca-file must be usable for both client auth and server auth;
  2. if --requestheader-allowed-names is not empty and the CN of the --proxy-client-cert-file certificate is not among the allowed names, later queries for node or pod metrics fail with an error like:

      [root@zhangjun-k8s01 1.8+]# kubectl top nodes
      Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope

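Once the apiservers are running (they are started later in this document), the effective requestheader settings can also be inspected through the extension-apiserver-authentication ConfigMap that kube-apiserver publishes in kube-system; a quick sketch:

    # Optional check: kube-apiserver publishes its requestheader configuration in this ConfigMap.
    kubectl -n kube-system get configmap extension-apiserver-authentication -o yaml | grep requestheader-allowed-names
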
Create and distribute the kube-apiserver systemd unit file for each node

Replace the variables in the template file to generate a systemd unit file for each node:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for (( i=0; i < 3; i++ ))
      do
        sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service
      done
    ls kube-apiserver*.service
  • NODE_NAMES and NODE_IPS are bash arrays of the same length containing the node names and their corresponding IPs (see the sketch below for an illustrative definition);

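For reference, a minimal sketch of how these arrays might be defined in /opt/k8s/bin/environment.sh; the first node name and the three IPs come from earlier in this document, while the other two node names are assumed purely for illustration:

    # Illustrative values only; zhangjun-k8s02 / zhangjun-k8s03 are assumed names.
    export NODE_NAMES=(zhangjun-k8s01 zhangjun-k8s02 zhangjun-k8s03)
    export NODE_IPS=(172.27.137.240 172.27.137.239 172.27.137.238)
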
Distribute the generated systemd unit files:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
      done
  • the file is renamed to kube-apiserver.service as it is copied;

Start the kube-apiserver service

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
        ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
      done
  • the working directory must be created before the service is started;

Check the kube-apiserver service status

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
      done

Make sure the status is active (running); otherwise check the logs to find out why:

    journalctl -u kube-apiserver

Print the data written to etcd by kube-apiserver

    source /opt/k8s/bin/environment.sh
    ETCDCTL_API=3 etcdctl \
      --endpoints=${ETCD_ENDPOINTS} \
      --cacert=/opt/k8s/work/ca.pem \
      --cert=/opt/k8s/work/etcd.pem \
      --key=/opt/k8s/work/etcd-key.pem \
      get /registry/ --prefix --keys-only

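Because secrets are listed in encryption-config.yaml, their values in etcd should be ciphertext rather than plain text. A sketch of one way to confirm this (test-secret is a throwaway name used only for the check):

    # A sketch: create a secret, then read its raw etcd value; it should start with
    # the prefix "k8s:enc:aescbc:v1:key1" instead of readable plain text.
    source /opt/k8s/bin/environment.sh
    kubectl create secret generic test-secret --from-literal=foo=bar
    ETCDCTL_API=3 etcdctl \
      --endpoints=${ETCD_ENDPOINTS} \
      --cacert=/opt/k8s/work/ca.pem \
      --cert=/opt/k8s/work/etcd.pem \
      --key=/opt/k8s/work/etcd-key.pem \
      get /registry/secrets/default/test-secret | hexdump -C | head
    kubectl delete secret test-secret
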
Check cluster information

    $ kubectl cluster-info
    Kubernetes master is running at https://127.0.0.1:8443
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    $ kubectl get all --all-namespaces
    NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    default     service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   12m

    $ kubectl get componentstatuses
    NAME                 STATUS      MESSAGE                                                                                     ERROR
    controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
    scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
    etcd-0               Healthy     {"health":"true"}
    etcd-2               Healthy     {"health":"true"}
    etcd-1               Healthy     {"health":"true"}
  1. If running a kubectl command produces the error below, the wrong ~/.kube/config file is being used; switch to the correct account and run the command again:

      The connection to the server localhost:8080 was refused - did you specify the right host or port?

  2. When running kubectl get componentstatuses, the apiserver sends the health checks to 127.0.0.1 by default. When controller-manager and scheduler run in cluster mode they may not be on the same machine as kube-apiserver, in which case their status shows as Unhealthy even though they are actually working correctly.

Check the ports kube-apiserver listens on

    $ sudo netstat -lnpt|grep kube
    tcp        0      0 172.27.137.240:6443     0.0.0.0:*               LISTEN      101442/kube-apiserv
  • 6443: the secure port that receives https requests; all requests on it are authenticated and authorized;
  • since the insecure port is disabled, nothing is listening on 8080;

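The secure port can also be probed directly with a client certificate. A sketch, assuming the admin.pem/admin-key.pem client certificate generated earlier in this series is still under /opt/k8s/work (the CA path and the node IP are taken from this document's examples):

    # A sketch: an authenticated request to /healthz should return "ok".
    curl --cacert /opt/k8s/work/ca.pem \
      --cert /opt/k8s/work/admin.pem \
      --key /opt/k8s/work/admin-key.pem \
      https://172.27.137.240:6443/healthz
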
Grant kube-apiserver access to the kubelet API

When commands such as kubectl exec, run, and logs are executed, the apiserver forwards the request to the kubelet's https port. The RBAC rule below grants the user of the apiserver's certificate (kubernetes.pem, CN: kubernetes) access to the kubelet API:

    kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
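
A quick way to confirm the binding was created as expected (a sketch, not part of the original steps):

    # Optional check: the role should be system:kubelet-api-admin and the subject user "kubernetes".
    kubectl describe clusterrolebinding kube-apiserver:kubelet-apis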