Installation by Binary

Step-by-step installation of a highly available karmada cluster from binaries.

Prerequisites

Servers

Three servers are required, for example:

+------------+----------------+--------------+
| HostName   | Host IP        | Public IP    |
+------------+----------------+--------------+
| karmada-01 | 172.31.209.245 | 47.242.88.82 |
+------------+----------------+--------------+
| karmada-02 | 172.31.209.246 |              |
+------------+----------------+--------------+
| karmada-03 | 172.31.209.247 |              |
+------------+----------------+--------------+

The public IP is not required. It is used to download some karmada dependencies from the public network and to connect to the karmada ApiServer over the public network.

DNS resolution

Perform the following on karmada-01, karmada-02, and karmada-03.

```shell
$ vi /etc/hosts
172.31.209.245 karmada-01
172.31.209.246 karmada-02
172.31.209.247 karmada-03
```

Alternatively, you can use a "Linux Virtual Server" for load balancing instead of modifying the /etc/hosts file.
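If you take the Linux Virtual Server route, one common setup is LVS managed by keepalived. The fragment below is a minimal sketch of a keepalived.conf virtual_server block, not part of this guide's downloaded files; the virtual IP 172.31.209.250 is illustrative, and the real servers point at the kube-apiserver port used later in this guide:

```
virtual_server 172.31.209.250 5443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 172.31.209.245 6443 {
        weight 1
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 172.31.209.246 6443 {
        weight 1
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 172.31.209.247 6443 {
        weight 1
        TCP_CHECK { connect_timeout 3 }
    }
}
```

Clients then address the virtual IP instead of resolving the node host names.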

Environment

karmada-01 requires the following:

Golang: to compile the karmada binaries
GCC: to compile nginx (skip this if you use a cloud load balancer)

Compile and download binaries

Perform the following on karmada-01.

Kubernetes binaries

Download the kubernetes binary package.

See this page to download binaries for other versions and architectures: https://kubernetes.io/releases/download/#binaries

```shell
wget https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz --no-same-owner
cd kubernetes/server/bin
mv kube-apiserver kube-controller-manager kubectl /usr/local/sbin/
```

etcd binaries

Download the etcd binary package.

To use a newer etcd release, see: https://etcd.io/docs/latest/install/

```shell
wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --no-same-owner
cd etcd-v3.5.1-linux-amd64/
mv etcdctl etcd /usr/local/sbin/
```

Karmada binaries

Compile the karmada binaries from source.

```shell
git clone https://github.com/karmada-io/karmada
cd karmada
make karmada-aggregated-apiserver karmada-controller-manager karmada-scheduler karmada-webhook karmadactl kubectl-karmada
mv _output/bin/linux/amd64/* /usr/local/sbin/
```

Nginx binaries

Compile the nginx binary from source.

```shell
wget http://nginx.org/download/nginx-1.21.6.tar.gz
tar -zxvf nginx-1.21.6.tar.gz
cd nginx-1.21.6
./configure --with-stream --without-http --prefix=/usr/local/karmada-nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install
mv /usr/local/karmada-nginx/sbin/nginx /usr/local/karmada-nginx/sbin/karmada-nginx
```

Distribute binaries

Upload the binaries to the karmada-02 and karmada-03 servers.
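A sketch of one way to do this, assuming passwordless root SSH to the other two nodes (adjust the file list to what you actually compiled and downloaded):

```shell
# Copy the installed binaries to the other control-plane nodes.
for host in karmada-02 karmada-03; do
    scp /usr/local/sbin/etcd /usr/local/sbin/etcdctl \
        /usr/local/sbin/kube-apiserver /usr/local/sbin/kube-controller-manager \
        /usr/local/sbin/kubectl /usr/local/sbin/karmada-* \
        "${host}:/usr/local/sbin/"
done
```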

Generate certificates

Step 1: create the bash scripts and config files

The scripts use the openssl command to generate certificates. Download this directory.

The CA and leaf certificate generation scripts are separated: if you need to change the subject alternative names of the leaf certificates (that is, the load balancer IP), you can reuse the CA certificates and run generate_leaf.sh to regenerate only the leaf certificates.

There are three CAs: front-proxy-ca, server-ca, and etcd/ca. For why three CAs are needed, see PKI certificates and requirements and CA reuse and conflicts.

If you use an etcd provided by someone else, you can ignore generate_etcd.sh and csr_config/etcd.

Step 2: change <SERVER_IP>

In the csr_config/**/*.conf files, change <SERVER_IP> to your "load balancer IP" and "server IPs". If you access the servers only through the load balancer, the "load balancer IP" alone is sufficient.

You normally do not need to change the *.sh files.
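The substitution can be done in one pass with sed. The snippet below is an illustration on a throwaway directory; run the same find/sed against your real csr_config directory with your own IP:

```shell
# Demo: fill the <SERVER_IP> placeholder the same way you would in
# csr_config/**/*.conf (the directory, file name, and IP here are made up).
mkdir -p csr_config_demo
printf '[ alt_names ]\nIP.1 = <SERVER_IP>\n' > csr_config_demo/kube-apiserver.conf
find csr_config_demo -name '*.conf' -exec sed -i 's/<SERVER_IP>/172.31.209.245/g' {} +
cat csr_config_demo/kube-apiserver.conf
```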

Step 3: run the shell scripts

```shell
./generate_ca.sh
./generate_leaf.sh ca_cert/
./generate_etcd.sh
```

Step 4: check the certificates

You can inspect a certificate's configuration, taking karmada.crt as an example.

```shell
openssl x509 -noout -text -in karmada.crt
```
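The field to pay particular attention to is the Subject Alternative Name, which must contain your load balancer and server IPs. With OpenSSL 1.1.1+ you can print just that extension; the snippet below generates a throwaway self-signed certificate only so the command has something to run against:

```shell
# Generate a stand-in certificate with an example SAN (illustrative only),
# then print only its Subject Alternative Name extension.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
    -addext "subjectAltName = IP:172.31.209.245" \
    -keyout demo.key -out demo.crt 2>/dev/null
openssl x509 -noout -ext subjectAltName -in demo.crt
```

Run the second command against your real karmada.crt.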

Step 5: create the Karmada configuration directory

Copy the certificates to the /etc/karmada/pki directory.

```shell
mkdir -p /etc/karmada/pki
cd ca_cert
cp -r * /etc/karmada/pki
cd ../cert
cp -r * /etc/karmada/pki
```

Create the Karmada kubeconfig files and etcd encryption key

Perform the following on karmada-01.

Create the kubeconfig files

Step 1: download the bash script

Download this file

Step 2: run the bash script

172.31.209.245:5443 is the address of the nginx proxy for karmada-apiserver, which we will set up later. Replace it with the "host:port" provided by your load balancer.

```shell
./create_kubeconfig_file.sh "https://172.31.209.245:5443"
```

Create the etcd encryption key

If you do not need to encrypt the contents of etcd, ignore this section and the corresponding kube-apiserver startup flag.

```shell
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > /etc/karmada/encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
```
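The aescbc provider requires keys that are exactly 32 bytes before base64 encoding. If you generate the key some other way, it is worth confirming that the base64 value decodes to that length:

```shell
# Sanity check: the key must decode to exactly 32 bytes for AES-CBC.
# Reuses ENCRYPTION_KEY if already set, otherwise generates a fresh one.
ENCRYPTION_KEY=${ENCRYPTION_KEY:-$(head -c 32 /dev/urandom | base64)}
echo -n "${ENCRYPTION_KEY}" | base64 -d | wc -c    # prints 32
```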

Distribute files

Pack the karmada configuration files and copy them to the other nodes.

```shell
cd /etc
tar -cvf karmada.tar karmada
scp karmada.tar karmada-02:/etc/
scp karmada.tar karmada-03:/etc/
```

karmada-02 and karmada-03 need to unpack the archive.

```shell
cd /etc
tar -xvf karmada.tar
```

Health check script

(1) You can use the following check_status.sh to check whether all components are healthy.

Download this file

Note: if you are using CentOS 7, you need to run yum update -y nss curl to update curl so that it may support TLS 1.3. If curl still does not support TLS 1.3, upgrade to an OS release that ships a newer curl, or perform the health checks with a Go program instead.

(2) Usage: ./check_status.sh
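To see whether your local curl is new enough before debugging failed checks, inspect its version line; a curl built against OpenSSL 1.1.1 or later can generally negotiate TLS 1.3:

```shell
# Print curl's version and SSL backend; a build against OpenSSL >= 1.1.1
# can usually negotiate TLS 1.3.
curl -V | head -n 1
# To test the handshake explicitly once a component is listening:
#   curl -vk --tlsv1.3 https://172.31.209.245:6443/healthz
```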

Install the etcd cluster

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

Create the systemd service

/usr/lib/systemd/system/etcd.service

```ini
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/sbin/etcd \
  --advertise-client-urls https://172.31.209.245:2379 \
  --cert-file /etc/karmada/pki/etcd/server.crt \
  --client-cert-auth=true \
  --data-dir /var/lib/etcd \
  --initial-advertise-peer-urls https://172.31.209.245:2380 \
  --initial-cluster "karmada-01=https://172.31.209.245:2380,karmada-02=https://172.31.209.246:2380,karmada-03=https://172.31.209.247:2380" \
  --initial-cluster-state new \
  --initial-cluster-token etcd-cluster \
  --key-file /etc/karmada/pki/etcd/server.key \
  --listen-client-urls "https://172.31.209.245:2379,https://127.0.0.1:2379" \
  --listen-peer-urls "https://172.31.209.245:2380" \
  --name karmada-01 \
  --peer-cert-file /etc/karmada/pki/etcd/peer.crt \
  --peer-client-cert-auth=true \
  --peer-key-file /etc/karmada/pki/etcd/peer.key \
  --peer-trusted-ca-file /etc/karmada/pki/etcd/ca.crt \
  --snapshot-count 10000 \
  --trusted-ca-file /etc/karmada/pki/etcd/ca.crt
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Note:

On karmada-02 and karmada-03, change the following flags accordingly:

--name

--initial-advertise-peer-urls

--listen-peer-urls

--listen-client-urls

--advertise-client-urls

You can use EnvironmentFile to separate the variable configuration from the immutable configuration.
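For example, the per-node values could live in an environment file while the unit stays identical on all three nodes. The sketch below uses file paths and variable names of its own invention, not anything shipped with the downloaded scripts:

```ini
# /etc/karmada/etcd.env on karmada-01 (each node keeps its own values):
#   ETCD_NAME=karmada-01
#   ETCD_HOST_IP=172.31.209.245
#
# The unit then references the variables instead of literal values:
[Service]
EnvironmentFile=/etc/karmada/etcd.env
ExecStart=/usr/local/sbin/etcd \
  --name ${ETCD_NAME} \
  --advertise-client-urls https://${ETCD_HOST_IP}:2379 \
  --initial-advertise-peer-urls https://${ETCD_HOST_IP}:2380 \
  --listen-peer-urls https://${ETCD_HOST_IP}:2380 \
  --listen-client-urls "https://${ETCD_HOST_IP}:2379,https://127.0.0.1:2379"
```

systemd expands ${VAR} references in ExecStart from the EnvironmentFile, so only etcd.env differs across nodes.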

Start the etcd cluster

All 3 servers must run the following commands to create the etcd storage directory:

```shell
mkdir /var/lib/etcd/
chmod 700 /var/lib/etcd
```

Start etcd:

```shell
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service
```

Verify

```shell
$ etcdctl --cacert /etc/karmada/pki/etcd/ca.crt \
    --cert /etc/karmada/pki/etcd/healthcheck-client.crt \
    --key /etc/karmada/pki/etcd/healthcheck-client.key \
    --endpoints "172.31.209.245:2379,172.31.209.246:2379,172.31.209.247:2379" \
    endpoint status --write-out="table"

+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.31.209.245:2379 | 689151f8cbf4ee95 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 172.31.209.246:2379 | 5db4dfb6ecc14de7 |   3.5.1 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
| 172.31.209.247:2379 | 7e59eef3c816aa57 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
```

Install kube-apiserver

Configure nginx

Perform the following on karmada-01.

Configure load balancing for the karmada apiserver.

/usr/local/karmada-nginx/conf/nginx.conf

```
worker_processes 2;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash consistent;
        server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
        server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
        server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 172.31.209.245:5443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
```

/lib/systemd/system/karmada-nginx.service

```ini
[Unit]
Description=The karmada karmada-apiserver nginx proxy server
After=syslog.target network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/local/karmada-nginx/sbin/karmada-nginx -t
ExecStart=/usr/local/karmada-nginx/sbin/karmada-nginx
ExecReload=/usr/local/karmada-nginx/sbin/karmada-nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start karmada nginx

```shell
systemctl daemon-reload
systemctl enable karmada-nginx.service
systemctl start karmada-nginx.service
systemctl status karmada-nginx.service
```

Create the kube-apiserver systemd service

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

/usr/lib/systemd/system/kube-apiserver.service

```ini
[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/home/
After=network.target

[Service]
# If you do not need to encrypt etcd, remove --encryption-provider-config
ExecStart=/usr/local/sbin/kube-apiserver \
  --allow-privileged=true \
  --anonymous-auth=false \
  --audit-webhook-batch-buffer-size 30000 \
  --audit-webhook-batch-max-size 800 \
  --authorization-mode "Node,RBAC" \
  --bind-address 0.0.0.0 \
  --client-ca-file /etc/karmada/pki/server-ca.crt \
  --default-watch-cache-size 200 \
  --delete-collection-workers 2 \
  --disable-admission-plugins "StorageObjectInUseProtection,ServiceAccount" \
  --enable-admission-plugins "NodeRestriction" \
  --enable-bootstrap-token-auth \
  --encryption-provider-config "/etc/karmada/encryption-config.yaml" \
  --etcd-cafile /etc/karmada/pki/etcd/ca.crt \
  --etcd-certfile /etc/karmada/pki/etcd/apiserver-etcd-client.crt \
  --etcd-keyfile /etc/karmada/pki/etcd/apiserver-etcd-client.key \
  --etcd-servers "https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379" \
  --insecure-port 0 \
  --logtostderr=true \
  --max-mutating-requests-inflight 2000 \
  --max-requests-inflight 4000 \
  --proxy-client-cert-file /etc/karmada/pki/front-proxy-client.crt \
  --proxy-client-key-file /etc/karmada/pki/front-proxy-client.key \
  --requestheader-allowed-names "front-proxy-client" \
  --requestheader-client-ca-file /etc/karmada/pki/front-proxy-ca.crt \
  --requestheader-extra-headers-prefix "X-Remote-Extra-" \
  --requestheader-group-headers "X-Remote-Group" \
  --requestheader-username-headers "X-Remote-User" \
  --runtime-config "api/all=true" \
  --secure-port 6443 \
  --service-account-issuer "https://kubernetes.default.svc.cluster.local" \
  --service-account-key-file /etc/karmada/pki/sa.pub \
  --service-account-signing-key-file /etc/karmada/pki/sa.key \
  --service-cluster-ip-range "10.254.0.0/16" \
  --tls-cert-file /etc/karmada/pki/kube-apiserver.crt \
  --tls-private-key-file /etc/karmada/pki/kube-apiserver.key
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start kube-apiserver

All 3 servers must run the following commands:

```shell
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl status kube-apiserver.service
```

Verify

```shell
$ ./check_status.sh
###### Start checking kube-apiserver
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
livez check passed
###### kube-apiserver check success
```

Install karmada-aggregated-apiserver

First, create the namespace and bind the cluster admin role. Perform the following on karmada-01.

```shell
kubectl create ns karmada-system
kubectl create clusterrolebinding cluster-admin:karmada --clusterrole=cluster-admin --user system:karmada
```

Then, like karmada-webhook, use nginx for high availability.

Modify the nginx configuration to add the following. Perform the following on karmada-01.

```shell
$ cat /usr/local/karmada-nginx/conf/nginx.conf
worker_processes 2;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash consistent;
        server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
        server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
        server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
    }

    upstream webhook {
        hash consistent;
        server 172.31.209.245:8443 max_fails=3 fail_timeout=30s;
        server 172.31.209.246:8443 max_fails=3 fail_timeout=30s;
        server 172.31.209.247:8443 max_fails=3 fail_timeout=30s;
    }

    upstream aa {
        hash consistent;
        server 172.31.209.245:7443 max_fails=3 fail_timeout=30s;
        server 172.31.209.246:7443 max_fails=3 fail_timeout=30s;
        server 172.31.209.247:7443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 172.31.209.245:5443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }

    server {
        listen 172.31.209.245:4443;
        proxy_connect_timeout 1s;
        proxy_pass webhook;
    }

    server {
        listen 172.31.209.245:443;
        proxy_connect_timeout 1s;
        proxy_pass aa;
    }
}
```

Reload the nginx configuration.

```shell
systemctl restart karmada-nginx
```

Create the systemd service

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

/usr/lib/systemd/system/karmada-aggregated-apiserver.service

```ini
[Unit]
Description=Karmada Aggregated ApiServer
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-aggregated-apiserver \
  --audit-log-maxage 0 \
  --audit-log-maxbackup 0 \
  --audit-log-path - \
  --authentication-kubeconfig /etc/karmada/karmada.kubeconfig \
  --authorization-kubeconfig /etc/karmada/karmada.kubeconfig \
  --etcd-cafile /etc/karmada/pki/etcd/ca.crt \
  --etcd-certfile /etc/karmada/pki/etcd/apiserver-etcd-client.crt \
  --etcd-keyfile /etc/karmada/pki/etcd/apiserver-etcd-client.key \
  --etcd-servers "https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379" \
  --feature-gates "APIPriorityAndFairness=false" \
  --kubeconfig /etc/karmada/karmada.kubeconfig \
  --logtostderr=true \
  --secure-port 7443 \
  --tls-cert-file /etc/karmada/pki/karmada.crt \
  --tls-private-key-file /etc/karmada/pki/karmada.key
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start karmada-aggregated-apiserver

```shell
systemctl daemon-reload
systemctl enable karmada-aggregated-apiserver.service
systemctl start karmada-aggregated-apiserver.service
systemctl status karmada-aggregated-apiserver.service
```

Create the APIService

externalName is the host name of the node where nginx is located (karmada-01).

(1) Create the file karmada-aggregated-apiserver-apiservice.yaml

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.cluster.karmada.io
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  insecureSkipTLSVerify: true
  group: cluster.karmada.io
  groupPriorityMinimum: 2000
  service:
    name: karmada-aggregated-apiserver
    namespace: karmada-system
    port: 443
  version: v1alpha1
  versionPriority: 10
---
apiVersion: v1
kind: Service
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
spec:
  type: ExternalName
  externalName: karmada-01
```

(2) kubectl create -f karmada-aggregated-apiserver-apiservice.yaml

Verify

```shell
$ ./check_status.sh
###### Start checking karmada-aggregated-apiserver
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/start-aggregated-server-informers ok
livez check passed
###### karmada-aggregated-apiserver check success
```

Install kube-controller-manager

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

Create the systemd service

/usr/lib/systemd/system/kube-controller-manager.service

```ini
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://kubernetes.io/docs/home/
After=network.target

[Service]
ExecStart=/usr/local/sbin/kube-controller-manager \
  --authentication-kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
  --authorization-kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
  --bind-address "0.0.0.0" \
  --client-ca-file /etc/karmada/pki/server-ca.crt \
  --cluster-name karmada \
  --cluster-signing-cert-file /etc/karmada/pki/server-ca.crt \
  --cluster-signing-key-file /etc/karmada/pki/server-ca.key \
  --concurrent-deployment-syncs 10 \
  --concurrent-gc-syncs 30 \
  --concurrent-service-syncs 1 \
  --controllers "namespace,garbagecollector,serviceaccount-token" \
  --feature-gates "RotateKubeletServerCertificate=true" \
  --horizontal-pod-autoscaler-sync-period 10s \
  --kube-api-burst 2000 \
  --kube-api-qps 1000 \
  --kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
  --leader-elect \
  --logtostderr=true \
  --node-cidr-mask-size 24 \
  --pod-eviction-timeout 5m \
  --requestheader-allowed-names "front-proxy-client" \
  --requestheader-client-ca-file /etc/karmada/pki/front-proxy-ca.crt \
  --requestheader-extra-headers-prefix "X-Remote-Extra-" \
  --requestheader-group-headers "X-Remote-Group" \
  --requestheader-username-headers "X-Remote-User" \
  --root-ca-file /etc/karmada/pki/server-ca.crt \
  --service-account-private-key-file /etc/karmada/pki/sa.key \
  --service-cluster-ip-range "10.254.0.0/16" \
  --terminated-pod-gc-threshold 10000 \
  --tls-cert-file /etc/karmada/pki/kube-controller-manager.crt \
  --tls-private-key-file /etc/karmada/pki/kube-controller-manager.key \
  --use-service-account-credentials \
  --v 4
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start kube-controller-manager

```shell
systemctl daemon-reload
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl status kube-controller-manager.service
```

Verify

```shell
$ ./check_status.sh
###### Start checking kube-controller-manager
[+]leaderElection ok
healthz check passed
###### kube-controller-manager check success
```

Install karmada-controller-manager

Create the systemd service

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

/usr/lib/systemd/system/karmada-controller-manager.service

```ini
[Unit]
Description=Karmada Controller Manager
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-controller-manager \
  --bind-address 0.0.0.0 \
  --cluster-status-update-frequency 10s \
  --kubeconfig /etc/karmada/karmada.kubeconfig \
  --logtostderr=true \
  --metrics-bind-address ":10358" \
  --secure-port 10357 \
  --v=4
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start karmada-controller-manager

```shell
systemctl daemon-reload
systemctl enable karmada-controller-manager.service
systemctl start karmada-controller-manager.service
systemctl status karmada-controller-manager.service
```

Verify

```shell
$ ./check_status.sh
###### Start checking karmada-controller-manager
[+]ping ok
healthz check passed
###### karmada-controller-manager check success
```

Install karmada-scheduler

Create the systemd service

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

/usr/lib/systemd/system/karmada-scheduler.service

```ini
[Unit]
Description=Karmada Scheduler
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-scheduler \
  --bind-address 0.0.0.0 \
  --enable-scheduler-estimator=true \
  --kubeconfig /etc/karmada/karmada.kubeconfig \
  --logtostderr=true \
  --scheduler-estimator-port 10352 \
  --secure-port 10511 \
  --v=4
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start karmada-scheduler

```shell
systemctl daemon-reload
systemctl enable karmada-scheduler.service
systemctl start karmada-scheduler.service
systemctl status karmada-scheduler.service
```

Verify

```shell
$ ./check_status.sh
###### Start checking karmada-scheduler
ok
###### karmada-scheduler check success
```

Install karmada-webhook

Unlike the scheduler and controller-manager, karmada-webhook requires nginx for high availability.

Modify the nginx configuration to add the following. Perform the following on karmada-01.

```shell
$ cat /usr/local/karmada-nginx/conf/nginx.conf
worker_processes 2;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash consistent;
        server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
        server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
        server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
    }

    upstream webhook {
        hash consistent;
        server 172.31.209.245:8443 max_fails=3 fail_timeout=30s;
        server 172.31.209.246:8443 max_fails=3 fail_timeout=30s;
        server 172.31.209.247:8443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 172.31.209.245:5443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }

    server {
        listen 172.31.209.245:4443;
        proxy_connect_timeout 1s;
        proxy_pass webhook;
    }
}
```

Reload the nginx configuration.

```shell
systemctl restart karmada-nginx
```

Create the systemd service

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

/usr/lib/systemd/system/karmada-webhook.service

```ini
[Unit]
Description=Karmada Webhook
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-webhook \
  --bind-address 0.0.0.0 \
  --cert-dir /etc/karmada/pki \
  --health-probe-bind-address ":8444" \
  --kubeconfig /etc/karmada/karmada.kubeconfig \
  --logtostderr=true \
  --metrics-bind-address ":8445" \
  --secure-port 8443 \
  --tls-cert-file-name "karmada.crt" \
  --tls-private-key-file-name "karmada.key" \
  --v=4
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start karmada-webhook

```shell
systemctl daemon-reload
systemctl enable karmada-webhook.service
systemctl start karmada-webhook.service
systemctl status karmada-webhook.service
```

Configure karmada-webhook

Download the webhook-configuration.yaml file: https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/webhook-configuration.yaml

```shell
ca_string=$(cat /etc/karmada/pki/server-ca.crt | base64 | tr "\n" " " | sed s/[[:space:]]//g)
sed -i "s/{{caBundle}}/${ca_string}/g" webhook-configuration.yaml
# Change 172.31.209.245:4443 to your load balancer host:port.
sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook-configuration.yaml
kubectl create -f webhook-configuration.yaml
```
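As an aside, the base64/tr/sed pipeline above simply produces single-line base64; on GNU coreutils, base64 -w 0 yields the same string, which you can verify on any file:

```shell
# Both forms emit identical single-line base64 (GNU coreutils).
# demo-ca.crt is a throwaway stand-in for the real CA certificate.
printf 'demo certificate data' > demo-ca.crt
a=$(cat demo-ca.crt | base64 | tr "\n" " " | sed s/[[:space:]]//g)
b=$(base64 -w 0 demo-ca.crt)
[ "$a" = "$b" ] && echo "identical"
```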

Verify

```shell
$ ./check_status.sh
###### Start checking karmada-webhook
ok
###### karmada-webhook check success
```

Initialize Karmada

Perform the following on karmada-01.

```shell
git clone https://github.com/karmada-io/karmada
cd karmada/charts/karmada/_crds/bases
kubectl apply -f .
cd ../patches/
ca_string=$(cat /etc/karmada/pki/server-ca.crt | base64 | tr "\n" " " | sed s/[[:space:]]//g)
sed -i "s/{{caBundle}}/${ca_string}/g" webhook_in_resourcebindings.yaml
sed -i "s/{{caBundle}}/${ca_string}/g" webhook_in_clusterresourcebindings.yaml
# Change 172.31.209.245:4443 to your load balancer host:port.
sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook_in_resourcebindings.yaml
sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook_in_clusterresourcebindings.yaml
kubectl patch CustomResourceDefinition resourcebindings.work.karmada.io --patch-file webhook_in_resourcebindings.yaml
kubectl patch CustomResourceDefinition clusterresourcebindings.work.karmada.io --patch-file webhook_in_clusterresourcebindings.yaml
```
At this point, the basic Karmada components are installed and you can join member clusters. If you want to use karmadactl aggregated queries, run the following command:

```shell
cat <<EOF | kubectl --kubeconfig=/etc/karmada/admin.kubeconfig apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-proxy-clusterrole
rules:
- apiGroups:
  - 'cluster.karmada.io'
  resources:
  - clusters/proxy
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-proxy-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-proxy-clusterrole
subjects:
- kind: User
  name: admin # the user name in the certificate
# The token generated by the serviceaccount can parse the group information.
# Therefore, you need to specify the group information below.
- kind: Group
  name: "system:masters" # the group in the certificate
EOF
```

Install karmada-scheduler-estimator (optional)

karmada-scheduler accesses karmada-scheduler-estimator over gRPC. You can use a "Linux Virtual Server" as the load balancer.

You need to deploy the components above and join member clusters to the Karmada control plane first. See Install Karmada in your own cluster.

Create the systemd service

In the following example, "/etc/karmada/physical-machine-karmada-member-1.kubeconfig" is the kubeconfig file of a joined member cluster.

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

/usr/lib/systemd/system/karmada-scheduler-estimator.service

```ini
[Unit]
Description=Karmada Scheduler Estimator
Documentation=https://github.com/karmada-io/karmada

[Service]
# You need to change `--cluster-name` and `--kubeconfig`
ExecStart=/usr/local/sbin/karmada-scheduler-estimator \
  --cluster-name "physical-machine-karmada-member-1" \
  --kubeconfig "/etc/karmada/physical-machine-karmada-member-1.kubeconfig" \
  --logtostderr=true \
  --server-port 10352
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start karmada-scheduler-estimator

```shell
systemctl daemon-reload
systemctl enable karmada-scheduler-estimator.service
systemctl start karmada-scheduler-estimator.service
systemctl status karmada-scheduler-estimator.service
```

Create the service

(1) Create karmada-scheduler-estimator.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: karmada-scheduler-estimator-{{MEMBER_CLUSTER_NAME}}
  namespace: karmada-system
  labels:
    cluster: {{MEMBER_CLUSTER_NAME}}
spec:
  ports:
  - protocol: TCP
    port: {{PORT}}
    targetPort: {{TARGET_PORT}}
  type: ExternalName
  externalName: {{EXTERNAL_NAME}}
```

{{PORT}}: the "--scheduler-estimator-port" value of "karmada-scheduler".

{{TARGET_PORT}}: the load balancer port.

{{EXTERNAL_NAME}}: the load balancer host.

{{MEMBER_CLUSTER_NAME}}: the member cluster name.
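For instance, with the member cluster name used earlier in this guide, a load balancer reachable as estimator-lb.example.internal on port 10352, and karmada-scheduler's --scheduler-estimator-port of 10352, the filled-in manifest could look like this (all values illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: karmada-scheduler-estimator-physical-machine-karmada-member-1
  namespace: karmada-system
  labels:
    cluster: physical-machine-karmada-member-1
spec:
  ports:
  - protocol: TCP
    port: 10352
    targetPort: 10352
  type: ExternalName
  externalName: estimator-lb.example.internal
```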

(2) Create the service

```shell
kubectl create --kubeconfig "/etc/karmada/admin.kubeconfig" -f karmada-scheduler-estimator.yaml
```

Verify

```shell
$ ./check_status.sh
###### Start checking karmada-scheduler-estimator
ok
###### karmada-scheduler-estimator check success
```

故障排查

  1. karmada-scheduler 需要一些时间才能找到新的 karmada-scheduler-estimator。如果你不想等,可以直接重启 karmada-scheduler。
  1. systemctl restart karmada-scheduler.service

Install karmada-search (optional)

Create the systemd service

Perform the following on karmada-01, karmada-02, and karmada-03, taking karmada-01 as an example.

/usr/lib/systemd/system/karmada-search.service

```ini
[Unit]
Description=Karmada Search
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-search \
  --audit-log-maxage 0 \
  --audit-log-maxbackup 0 \
  --audit-log-path - \
  --authentication-kubeconfig /etc/karmada/karmada.kubeconfig \
  --authorization-kubeconfig /etc/karmada/karmada.kubeconfig \
  --etcd-cafile /etc/karmada/pki/etcd/ca.crt \
  --etcd-certfile /etc/karmada/pki/etcd/apiserver-etcd-client.crt \
  --etcd-keyfile /etc/karmada/pki/etcd/apiserver-etcd-client.key \
  --etcd-servers "https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379" \
  --feature-gates "APIPriorityAndFairness=false" \
  --kubeconfig /etc/karmada/karmada.kubeconfig \
  --logtostderr=true \
  --secure-port 9443 \
  --tls-cert-file /etc/karmada/pki/karmada.crt \
  --tls-private-key-file /etc/karmada/pki/karmada.key
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Start karmada-search:

```shell
systemctl daemon-reload
systemctl enable karmada-search.service
systemctl start karmada-search.service
systemctl status karmada-search.service
```

Verify

```shell
$ ./check_status.sh
###### Start checking karmada-search
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/start-karmada-search-informers ok
[+]poststarthook/start-karmada-informers ok
[+]poststarthook/start-karmada-search-controller ok
[+]poststarthook/start-karmada-proxy-controller ok
livez check passed
###### karmada-search check success
```
