Import an AWS EKS Cluster

This tutorial demonstrates how to import an AWS EKS cluster through the direct connection method. If you want to use the agent connection method, refer to Agent Connection.

Prerequisites

  • You have a Kubernetes cluster with KubeSphere installed and have prepared it as the host cluster. For more information about how to prepare a host cluster, refer to Prepare a host cluster.
  • You have an EKS cluster to be used as the member cluster.

Import an EKS Cluster

Step 1: Deploy KubeSphere on your EKS cluster

You need to deploy KubeSphere on your EKS cluster first. For more information about how to deploy KubeSphere on EKS, refer to Deploy KubeSphere on AWS EKS.
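
For reference, a typical ks-installer deployment looks like the sketch below. This is a minimal example assuming KubeSphere v3.3.2; adjust the version and the cluster configuration to your environment before applying.

    # Deploy the installer and the default cluster configuration (v3.3.2 assumed).
    kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
    kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml

    # Follow the installer logs until installation completes.
    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f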

Step 2: Prepare the EKS member cluster

  1. To manage the member cluster from the host cluster, the jwtSecret must be the same on both clusters. Retrieve it first by executing the following command on your host cluster.

    kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret

    The output is similar to the following:

    jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
  2. Log in to the KubeSphere console of the EKS cluster as admin. Click Platform in the upper-left corner and then select Cluster Management.

  3. Go to CRDs, enter ClusterConfiguration in the search bar, and then press Enter on your keyboard. Click ClusterConfiguration to go to its detail page.

  4. Click the three-dot icon on the right and then select Edit YAML to edit ks-installer.

  5. In the YAML file of ks-installer, change the value of jwtSecret to the corresponding value shown above and set the value of clusterRole to member. Click Update to save your changes.

    authentication:
      jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g

    multicluster:
      clusterRole: member

    Note

    Make sure you use your own jwtSecret value. It takes a while for the changes to take effect. (A command-line alternative for this step is sketched below.)
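
If you prefer the command line to the console, the same two fields can be set by patching the ClusterConfiguration object directly on the EKS cluster. This is a minimal sketch, not the documented procedure: it assumes the object is named ks-installer in the kubesphere-system namespace (as in the steps above), and HOST_JWT_SECRET is a placeholder for your own value.

    # Set jwtSecret and clusterRole in a single merge patch; replace the placeholder.
    HOST_JWT_SECRET="<jwtSecret-from-host-cluster>"
    kubectl -n kubesphere-system patch clusterconfiguration ks-installer --type merge \
      -p "{\"spec\":{\"authentication\":{\"jwtSecret\":\"${HOST_JWT_SECRET}\"},\"multicluster\":{\"clusterRole\":\"member\"}}}"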

Step 3: Create a new kubeconfig file

  1. Amazon EKS doesn’t provide a built-in kubeconfig file the way a standard kubeadm cluster does. Nevertheless, you can create a kubeconfig file by referring to this document (a one-line alternative using the AWS CLI is sketched after this list). The generated kubeconfig file will look like the following:

    apiVersion: v1
    clusters:
    - cluster:
        server: <endpoint-url>
        certificate-authority-data: <base64-encoded-ca-cert>
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: aws
      name: aws
    current-context: aws
    kind: Config
    preferences: {}
    users:
    - name: aws
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: aws
          args:
            - "eks"
            - "get-token"
            - "--cluster-name"
            - "<cluster-name>"
            # - "--role"
            # - "<role-arn>"
          # env:
            # - name: AWS_PROFILE
            #   value: "<aws-profile>"

    However, this automatically generated kubeconfig requires the aws command (AWS CLI) to be installed on every computer that uses it.

  2. Run the following commands on your local computer to get the token of the ServiceAccount kubesphere created by KubeSphere. It has cluster-admin access to the cluster and will be used as the token in the new kubeconfig. (If your cluster runs Kubernetes 1.24 or later, see the note after this list.)

    TOKEN=$(kubectl -n kubesphere-system get secret $(kubectl -n kubesphere-system get sa kubesphere -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
    kubectl config set-credentials kubesphere --token=${TOKEN}
    kubectl config set-context --current --user=kubesphere
  3. Retrieve the new kubeconfig file by running the following command:

    cat ~/.kube/config

    The output is similar to the following. You can see that a new user kubesphere has been inserted and set as the user of the current context:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZ...S0tLQo=
        server: https://*.sk1.cn-north-1.eks.amazonaws.com.cn
      name: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
    contexts:
    - context:
        cluster: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
        user: kubesphere
      name: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
    current-context: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
    kind: Config
    preferences: {}
    users:
    - name: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          args:
          - --region
          - cn-north-1
          - eks
          - get-token
          - --cluster-name
          - EKS-LUSLVMT6
          command: aws
          env: null
    - name: kubesphere
      user:
        token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImlCRHF4SlE5a0JFNDlSM2xKWnY1Vkt5NTJrcDNqRS1Ta25IYkg1akhNRmsifQ.eyJpc3M................9KQtFULW544G-FBwURd6ArjgQ3Ay6NHYWZe3gWCHLmag9gF-hnzxequ7oN0LiJrA-al1qGeQv-8eiOFqX3RPCQgbybmix8qw5U6f-Rwvb47-xA

    Run the following command to verify that the new kubeconfig has access to the EKS cluster:

    kubectl get nodes

    The output is similar to this:

    NAME                                        STATUS   ROLES    AGE   VERSION
    ip-10-0-47-38.cn-north-1.compute.internal   Ready    <none>   11h   v1.18.8-eks-7c9bda
    ip-10-0-8-148.cn-north-1.compute.internal   Ready    <none>   78m   v1.18.8-eks-7c9bda
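
For reference, the kubeconfig described in step 1 can also be generated with a single AWS CLI command. A minimal sketch, assuming the AWS CLI is installed and configured with credentials that can reach the cluster; the region and cluster name are placeholders:

    # Writes or merges an entry for the EKS cluster into ~/.kube/config.
    aws eks update-kubeconfig --region <region> --name <cluster-name>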
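
Note that on Kubernetes 1.24 and later, token Secrets are no longer created automatically for ServiceAccounts, so the .secrets[0].name lookup in step 2 may return nothing. The following sketch requests a token explicitly instead; the duration is an assumption, and the API server may cap it:

    # Request a token for the kubesphere ServiceAccount via the TokenRequest API.
    TOKEN=$(kubectl -n kubesphere-system create token kubesphere --duration=87600h)
    kubectl config set-credentials kubesphere --token=${TOKEN}
    kubectl config set-context --current --user=kubesphere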

Step 4: Import the EKS member cluster

  1. Log in to the KubeSphere console on your host cluster as admin. Click Platform in the upper-left corner and then select Cluster Management. On the Cluster Management page, click Add Cluster.

  2. Enter the basic information based on your needs and click Next.

  3. In Connection Method, select Direct connection. Paste in the new kubeconfig file of the EKS member cluster and then click Create. (A declarative alternative using kubectl is sketched after this list.)

  4. Wait for cluster initialization to finish.
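
If you would rather import the member cluster declaratively than through the console, KubeSphere models member clusters as a Cluster custom resource on the host cluster. The following is a hedged sketch, not the documented procedure: it assumes the cluster.kubesphere.io/v1alpha1 API of KubeSphere 3.x, and the name and base64-encoded kubeconfig are placeholders.

    apiVersion: cluster.kubesphere.io/v1alpha1
    kind: Cluster
    metadata:
      name: eks-member                            # placeholder: choose your own name
    spec:
      connection:
        type: direct                              # direct connection, as in this tutorial
        kubeconfig: <base64-encoded-kubeconfig>   # base64 of the file from Step 3
      joinFederation: true

Apply it with kubectl apply -f on the host cluster; the cluster appears on the Cluster Management page once initialization finishes.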
