Install Kubernetes and KubeSphere on Linux

This section explains how to install Kubernetes and KubeSphere.

The installation process will use the open-source tool KubeKey. For more information about KubeKey, please visit the GitHub KubeKey repository.

Note

You can also run the command in the Install KubeSphere section to directly upgrade KubeSphere from v4.1.1 to v4.1.2.

Prerequisites

  • Prepare at least 1 Linux server as a cluster node. In a production environment, to ensure high availability of the cluster, it is recommended to prepare at least 5 Linux servers, with 3 servers as control plane nodes and 2 servers as worker nodes. If you are installing KubeSphere on multiple Linux servers, make sure all servers belong to the same subnet.

  • The operating system and version of the cluster nodes must be Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 22.04, Debian 9, Debian 10, CentOS 7, CentOS Stream, RHEL 7, RHEL 8, SLES 15, or openSUSE Leap 15. The operating systems of the servers do not need to be the same. For support of other operating systems and versions, please contact KubeSphere technical support.

  • In a production environment, to ensure the cluster has sufficient computing and storage resources, it is recommended that each cluster node be configured with at least 8 CPU cores, 16 GB of memory, and 200 GB of disk space. In addition, it is recommended to mount an additional 200 GB of disk space in the /var/lib/docker (for Docker) or /var/lib/containerd (for containerd) directory of each cluster node for storing container runtime data.

  • In a production environment, it is recommended to configure high availability for the KubeSphere cluster in advance to avoid service interruption in the event of a single control plane node failure. For more information, please refer to Configure High Availability.

    Note

    If you plan to have multiple control plane nodes, be sure to configure high availability for the cluster in advance.

  • By default, KubeSphere uses the local disk space of the cluster nodes as persistent storage. In a production environment, it is recommended to configure an external storage system as persistent storage in advance. For more information, please refer to Configure External Persistent Storage.

  • If the cluster nodes do not have a container runtime installed, the installation tool KubeKey will automatically install Docker as the container runtime for each cluster node during the installation process. You can also manually install containerd, CRI-O, or iSula as the container runtime in advance.

    Note

    CRI-O and iSula have not been fully tested for compatibility with KubeSphere, and there may be unknown issues.

  • Make sure the DNS server addresses configured in the /etc/resolv.conf file on all cluster nodes are available. Otherwise, the KubeSphere cluster may encounter domain name resolution issues.

  • Make sure you can use the sudo, curl, and openssl commands on all cluster nodes.

  • Make sure the time is synchronized on all cluster nodes. A quick way to check this and the previous two items is shown in the sketch after this list.
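
The following is a minimal sketch for spot-checking the last three prerequisites on a node. It assumes a systemd-based distribution (for timedatectl) and only prints warnings; adapt it to your environment.

    # Check that sudo, curl, and openssl are available
    for cmd in sudo curl openssl; do
      command -v "$cmd" >/dev/null || echo "missing command: $cmd"
    done

    # Check that the DNS servers in /etc/resolv.conf can resolve names
    cat /etc/resolv.conf
    getent hosts kubesphere.io || echo "DNS resolution failed"

    # Check that the system clock is synchronized (systemd-based systems)
    timedatectl status | grep -i "synchronized"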

Configure Firewall Rules

KubeSphere requires specific ports and protocols for communication between services. If your infrastructure environment has enabled a firewall, you need to open the required ports and protocols in the firewall settings. If your infrastructure environment does not have a firewall enabled, you can skip this step.

The following table lists the ports and protocols that need to be opened in the firewall.

| Service        | Protocol     | Start Port | End Port | Remarks                                       |
| -------------- | ------------ | ---------- | -------- | --------------------------------------------- |
| ssh            | TCP          | 22         |          |                                               |
| etcd           | TCP          | 2379       | 2380     |                                               |
| apiserver      | TCP          | 6443       |          |                                               |
| calico         | TCP          | 9099       | 9100     |                                               |
| bgp            | TCP          | 179        |          |                                               |
| nodeport       | TCP          | 30000      | 32767    |                                               |
| master         | TCP          | 10250      | 10258    |                                               |
| dns            | TCP          | 53         |          |                                               |
| dns            | UDP          | 53         |          |                                               |
| metrics-server | TCP          | 8443       |          |                                               |
| local-registry | TCP          | 5000       |          | Required for offline environments             |
| local-apt      | TCP          | 5080       |          | Required for offline environments             |
| rpcbind        | TCP          | 111        |          | Required when using NFS as persistent storage |
| ipip           | IPENCAP/IPIP |            |          | Required when using Calico                    |
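
If your nodes use firewalld, the following sketch shows how the ports above might be opened. It is only an illustration under the assumption that firewalld manages the node firewall; adapt it to ufw, iptables, or cloud security group rules as needed.

    sudo firewall-cmd --permanent --add-port=22/tcp
    sudo firewall-cmd --permanent --add-port=2379-2380/tcp
    sudo firewall-cmd --permanent --add-port=6443/tcp
    sudo firewall-cmd --permanent --add-port=9099-9100/tcp
    sudo firewall-cmd --permanent --add-port=179/tcp
    sudo firewall-cmd --permanent --add-port=30000-32767/tcp
    sudo firewall-cmd --permanent --add-port=10250-10258/tcp
    sudo firewall-cmd --permanent --add-port=53/tcp
    sudo firewall-cmd --permanent --add-port=53/udp
    sudo firewall-cmd --permanent --add-port=8443/tcp
    # Also open 5000/tcp and 5080/tcp for offline environments, and 111/tcp when using NFS
    sudo firewall-cmd --permanent --add-protocol=ipencap   # IP-in-IP traffic, required when using Calico IPIP mode
    sudo firewall-cmd --reload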

Install Dependencies

You need to install socat, conntrack, ebtables, and ipset on all cluster nodes. If these dependencies already exist on each cluster node, you can skip this step.

On Ubuntu systems, run the following command to install the dependencies on the servers:

    sudo apt install socat conntrack ebtables ipset -y

If the cluster nodes use other operating systems, replace apt with the corresponding package management tool for the operating system.
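
For example, on CentOS or RHEL nodes the equivalent command is typically the following (a sketch; package names can vary slightly between distributions and releases):

    sudo yum install -y socat conntrack ebtables ipset
    # On dnf-based systems:
    sudo dnf install -y socat conntrack ebtables ipset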

Install Kubernetes

  1. If you are accessing GitHub/Googleapis from a restricted location, please log in to any cluster node and run the following command to set the download region:

      export KKZONE=cn
  2. Run the following command to download the latest version of KubeKey:

      curl -sfL https://get-kk.kubesphere.io | sh -

    After the download is complete, a KubeKey binary file kk will be generated in the current directory.

    Note

    If the cluster node used to perform the operations cannot connect to the internet, you can manually download KubeKey on a device with internet access and then transfer it to the cluster node.
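
    For example, you might download a KubeKey release archive on a machine with internet access and copy it to the node. The release URL pattern, version, and paths below are assumptions, so check the KubeKey releases page on GitHub for the exact file name.

      # On a machine with internet access (version and file name are examples)
      curl -LO https://github.com/kubesphere/kubekey/releases/download/v3.1.7/kubekey-v3.1.7-linux-amd64.tar.gz
      # Copy the archive to the cluster node, then unpack it there to obtain the kk binary
      scp kubekey-v3.1.7-linux-amd64.tar.gz <user>@<node-ip>:~/
      tar -zxvf kubekey-v3.1.7-linux-amd64.tar.gz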

  3. Add execute permission to the KubeKey binary file kk:

      sudo chmod +x kk
  4. Create the installation configuration file config-sample.yaml:

      ./kk create config --with-kubernetes <Kubernetes version>

    Replace <Kubernetes version> with the actual version needed, for example v1.27.4. KubeSphere by default supports Kubernetes v1.21~1.28.
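
    For example, to generate a configuration file for Kubernetes v1.27.4:

      ./kk create config --with-kubernetes v1.27.4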

    After the command completes, the installation configuration file config-sample.yaml will be generated.

    Note

    After KubeSphere is installed, please do not delete config-sample.yaml. This file will still be used for subsequent operations such as adding nodes. If this file is missing, you will need to recreate it.

  5. Edit config-sample.yaml

      vi config-sample.yaml

    The following is a part of the configuration file sample. For a complete example, please refer to this file.

      apiVersion: kubekey.kubesphere.io/v1alpha2
      kind: Cluster
      metadata:
        name: sample
      spec:
        hosts:
        - {name: controlplane1, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 23, user: ubuntu, password: Testing123, arch: arm64} # For arm64 nodes, add the parameter arch: arm64
        - {name: controlplane2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
        - {name: worker1, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
        - {name: worker2, address: 192.168.0.5, internalAddress: 192.168.0.5, user: ubuntu, password: Testing123}
        - {name: registry, address: 192.168.0.6, internalAddress: 192.168.0.6, user: ubuntu, password: Testing123}
        roleGroups:
          etcd:
          - controlplane1
          - controlplane2
          control-plane:
          - controlplane1
          - controlplane2
          worker:
          - worker1
          - worker2
          # If you want to use kk to automatically deploy the image registry, set up the registry role (it is recommended to deploy the image registry and the cluster nodes on separate servers to reduce mutual interference)
          registry:
          - registry
        controlPlaneEndpoint:
          internalLoadbalancer: haproxy # If you need to deploy a high availability cluster and no load balancer is available, you can enable this parameter to perform load balancing within the cluster.
          domain: lb.kubesphere.local
          address: ""
          port: 6443
        kubernetes:
          version: v1.23.15
          clusterName: cluster.local
        network:
          plugin: calico
          kubePodsCIDR: 10.233.64.0/18
          kubeServiceCIDR: 10.233.0.0/18
          ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
          enableMultusCNI: false
        registry:
          # If you want to use kk to deploy harbor, you can set this parameter to harbor. If you do not set this parameter and still need kk to deploy an image registry, the docker registry is deployed by default.
          # Harbor does not support arm64. This parameter does not need to be configured when deploying in an arm64 environment.
          type: harbor
          # If you use kk to deploy harbor or another registry that requires authentication, you need to set the auths of the corresponding registry. If you use kk to deploy the default docker registry, you do not need to configure the auths parameter.
          # Note: If you use kk to deploy harbor, set the auths parameter after creating the harbor project.
          auths:
            "dockerhub.kubekey.local":
              username: admin # harbor default username
              password: Harbor12345 # harbor default password
              plainHTTP: false # If the registry uses http, set this parameter to true
          privateRegistry: "dockerhub.kubekey.local/kse" # Set the private registry address used during cluster deployment
          registryMirrors: []
          insecureRegistries: []
        addons: []
  6. Set the information of each server under the spec:hosts parameter in config-sample.yaml.

    | Parameter       | Description |
    | --------------- | ----------- |
    | name            | User-defined server name. |
    | address         | The SSH login IP address of the server. |
    | internalAddress | The IP address of the server within the subnet. |
    | port            | The SSH port number of the server. This parameter does not need to be set if the default port 22 is used. |
    | user            | The SSH login user name of the server, which must be the root user or another user with sudo permissions. If you log in as root, this parameter does not need to be set. |
    | password        | The SSH login password of the server. This parameter does not need to be set if privateKeyPath has been set. |
    | privateKeyPath  | The path to the SSH login key of the server. This parameter does not need to be set if password has been set. |
    | arch            | The server architecture. If the server's hardware architecture is Arm64, set this parameter to arm64; otherwise, do not set it. By default, the installation package only supports clusters whose nodes are all x86_64 or all arm64. If the cluster nodes do not all share the same hardware architecture, please contact the KubeSphere technical support team. |
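
    As an illustration of the parameters above, a worker node entry that uses a non-default SSH port and key-based login for a sudo-capable user might look like the following (all values are placeholders):

      - {name: worker3, address: 192.168.0.7, internalAddress: 192.168.0.7, port: 2222, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}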

  7. Set the server’s role under the spec:roleGroups parameter in config-sample.yaml.

    | Parameter     | Description |
    | ------------- | ----------- |
    | etcd          | Nodes for installing the etcd database. Set the cluster control plane nodes under this parameter. |
    | control-plane | Cluster control plane nodes. If you have configured high availability for the cluster, you can set multiple control plane nodes. |
    | worker        | Cluster worker nodes. |
    | registry      | Server used for creating a private image registry. This server is not used as a cluster node. During the installation or upgrade of KubeSphere, if the cluster nodes cannot connect to the internet, you need to set the server used for creating a private image registry under this parameter; otherwise, you can comment out this parameter. |

  8. If you have multiple control plane nodes, set high availability information under the spec:controlPlaneEndpoint parameter in config-sample.yaml.

    | Parameter            | Description |
    | -------------------- | ----------- |
    | internalLoadBalancer | Type of the internal load balancer. If using the local load balancer configuration, set this parameter to haproxy; otherwise, you can comment out this parameter. |
    | domain               | Internal access domain of the load balancer. Set this parameter to lb.kubesphere.local. |
    | address              | IP address of the load balancer. If using the local load balancer configuration, leave this parameter empty. If using a dedicated load balancer, set this parameter to the IP address of the load balancer. If using a generic server as the load balancer, set this parameter to the floating IP address of the load balancer. |
    | port                 | Port number that the load balancer listens on, which is the port number of the apiserver service. Set this parameter to 6443. |
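
    For example, if a dedicated load balancer sits in front of the control plane nodes, the section might look like the following sketch (the IP address is a placeholder, and internalLoadbalancer is commented out because an external load balancer is used):

      controlPlaneEndpoint:
        # internalLoadbalancer: haproxy
        domain: lb.kubesphere.local
        address: "192.168.0.100"   # IP address of the dedicated load balancer
        port: 6443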

  9. If you need to use external persistent storage, set the external persistent storage information under the spec:addons parameter in config-sample.yaml.

    • If using a cloud storage device, set the following parameters under spec:addons (replace <configuration file path> with the actual path of the storage plugin configuration file):

        - name: csi-qingcloud
          namespace: kube-system
          sources:
            chart:
              name: csi-qingcloud
              repo: https://charts.kubesphere.io/test
              valuesFile: <configuration file path>

    • If using NeonSAN storage, set the following parameters under spec:addons (replace <configuration file path> with the actual path of the storage plugin configuration file):

        - name: csi-neonsan
          namespace: kube-system
          sources:
            chart:
              name: csi-neonsan
              repo: https://charts.kubesphere.io/test
              valuesFile: <configuration file path>

    • If using an NFS storage system, set the following parameters under spec:addons (replace <configuration file path> with the actual path of the storage plugin configuration file):

        - name: nfs-client
          namespace: kube-system
          sources:
            chart:
              name: nfs-client-provisioner
              repo: https://charts.kubesphere.io/main
              valuesFile: <configuration file path>
  10. Create a Kubernetes cluster.

      ./kk create cluster -f config-sample.yaml
    Note

    If you want to use OpenEBS LocalPV, you can add the --with-local-storage parameter to the above command. If you want to integrate with other storage solutions, configure the relevant storage plugins under the spec:addons parameter in config-sample.yaml or install them after the Kubernetes deployment.
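
    For example, to create the cluster and install OpenEBS LocalPV in the same run (a sketch combining the command and the parameter above):

      ./kk create cluster -f config-sample.yaml --with-local-storage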

    If you see the following information, it means that the Kubernetes cluster is successfully created.

      Pipeline[CreateclusterPipeline] execute successfully

Install KubeSphere

KubeSphere Core (ks-core) is the core component of KubeSphere, providing a foundational runtime environment for extensions. Once KubeSphere Core is installed, you can access the KubeSphere web console.

  1. If Helm is not yet installed, install it in advance. For specific instructions, please refer to Helm Installation, or run the following command:

      curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
  2. Run the following command on the cluster node to install KubeSphere Core.

      helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.2.tgz --debug --wait
    Note

    If you are accessing Docker Hub from a restricted location, add the following configuration after the above command to modify the default image pull address.

      --set global.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks
      --set extension.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks
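
    For example, the complete command with the image registry overrides appended might look like this (a sketch that simply adds the two flags above to the earlier command):

      helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.2.tgz --debug --wait \
        --set global.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks \
        --set extension.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks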

    If you see the following information, it means that ks-core installation is successful:

      NOTES:
      Thank you for choosing KubeSphere Helm Chart.
      Please be patient and wait for several seconds for the KubeSphere deployment to complete.
      1. Wait for Deployment Completion
         Confirm that all KubeSphere components are running by executing the following command:
         kubectl get pods -n kubesphere-system
      2. Access the KubeSphere Console
         Once the deployment is complete, you can access the KubeSphere console using the following URL:
         http://192.168.6.10:30880
      3. Login to KubeSphere Console
         Use the following credentials to log in:
         Account: admin
         Password: P@88w0rd
      NOTE: It is highly recommended to change the default password immediately after the first login.
      For additional information and details, please visit https://kubesphere.io.
  3. From the success message, retrieve the Console address, Account, and Password to obtain the URL of the KubeSphere web console, the default admin username, and the default admin password, then log in to the KubeSphere web console in a web browser.

    Note

    Depending on your hardware and network environment, you may need to configure traffic forwarding rules and open port 30880 in the firewall.
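
    For example, with firewalld you might open the console port as follows (a sketch; adapt it to your own firewall or cloud security group configuration):

      sudo firewall-cmd --permanent --add-port=30880/tcp
      sudo firewall-cmd --reload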