Deploy KubeSphere on Azure VM Instances

Using the Azure cloud platform, you can either install and manage Kubernetes by yourself or adopt a managed Kubernetes solution. If you want to use a fully-managed platform solution, see Deploy KubeSphere on AKS for more details.

Alternatively, you can set up a highly-available cluster on Azure instances. This tutorial demonstrates how to create a production-ready Kubernetes and KubeSphere cluster.

Introduction

This tutorial uses two key features of Azure virtual machines (VMs):

  • Virtual Machine Scale Sets (VMSS): Azure VMSS lets you create and manage a group of load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule (Kubernetes Autoscaler is available but not covered in this tutorial; see autoscaler for more details), which perfectly fits Worker nodes.
  • Availability Sets: An availability set is a logical grouping of VMs within a datacenter that are automatically distributed across fault domains. This approach limits the impact of potential physical hardware failures, network outages, or power interruptions. All the Master and etcd VMs will be placed in an availability set to achieve high availability.

Besides these VMs, other resources like Load Balancer, Virtual Network and Network Security Group will also be used.

Prerequisites

  • You need an Azure account to create all the resources.
  • Basic knowledge of Azure Resource Manager (ARM) templates, which are files that define the infrastructure and configuration for your project.
  • For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use OpenEBS, which is installed by KubeKey by default, to provision LocalPV directly.

Architecture

Six Ubuntu 18.04 machines will be deployed in an Azure Resource Group. Three of them are grouped into an availability set, serving as both the control plane and etcd nodes. The other three VMs are defined in a VMSS where Worker nodes will be running.

Architecture

These VMs will be attached to a load balancer. There are two predefined rules in the load balancer:

  • Inbound NAT: The SSH port will be mapped for each machine so that you can easily manage VMs.
  • Load Balancing: The http and https ports will be mapped to Node pools by default. Other ports can be added on demand.
| Service | Protocol | Rule | Backend Port | Frontend Port/Ports | Pools |
| --- | --- | --- | --- | --- | --- |
| ssh | TCP | Inbound NAT | 22 | 50200, 50201, 50202, 50100~50199 | Master, Node |
| apiserver | TCP | Load Balancing | 6443 | 6443 | Master |
| ks-console | TCP | Load Balancing | 30880 | 30880 | Master |
| http | TCP | Load Balancing | 80 | 80 | Node |
| https | TCP | Load Balancing | 443 | 443 | Node |

Create HA Cluster Infrastructure

You don’t have to create these resources one by one. According to the best practice of infrastructure as code on Azure, all resources in the architecture are already defined as ARM templates.

Prepare machines

  1. Click the Deploy button below, and you will be redirected to Azure and asked to fill in deployment parameters.

    Deploy to Azure Visualize

  2. On the page that appears, only a few parameters need to be changed. Click Create new under Resource group and enter a name such as KubeSphereVMRG.

  3. Enter Admin Username.

  4. Copy your public SSH key for the field Admin Key. Alternatively, create a new one with ssh-keygen.

    azure-template-parameters

    Note

    Password authentication is disabled in the Linux configuration. Only SSH key authentication is accepted.

  5. Click Purchase at the bottom to continue.
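If you do not yet have a key pair for the Admin Key field in step 4, you can generate one locally with ssh-keygen. A minimal sketch (the file name kubesphere_azure is an arbitrary example; pick any path you like):

```shell
# Generate a 4096-bit RSA key pair without a passphrase.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -N "" -C "kubesphere-admin" -f "$HOME/.ssh/kubesphere_azure" -q
# Print the public key; paste this into the Admin Key field.
cat "$HOME/.ssh/kubesphere_azure.pub"
```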

Review Azure resources in the Portal

After the deployment succeeds, all the resources are displayed in the resource group KubeSphereVMRG. Record the public IP of the load balancer and the private IP addresses of the VMs. You will need them later.

New Created Resources
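Instead of copying the addresses from the Portal, they can also be listed with the Azure CLI. A sketch, assuming the resource group name KubeSphereVMRG used above and a scale set named node (adjust both to your deployment):

```shell
# Public IP of the load balancer:
az network public-ip list \
  --resource-group KubeSphereVMRG \
  --query "[].{name:name, address:ipAddress}" -o table

# Private IPs of the availability-set VMs:
az vm list-ip-addresses --resource-group KubeSphereVMRG -o table

# Private IPs of the VMSS instances (scale set name "node" is an assumption):
az vmss nic list --resource-group KubeSphereVMRG --vmss-name node \
  --query "[].ipConfigurations[].privateIpAddress" -o tsv
```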

Deploy Kubernetes and KubeSphere

Execute the following commands on your device or connect to one of the Master VMs through SSH. During the installation, files will be downloaded and distributed to each VM.

```shell
# copy your private SSH key to master-0
scp -P 50200 ~/.ssh/id_rsa kubesphere@40.81.5.xx:/home/kubesphere/.ssh/
# ssh to master-0
ssh -i ~/.ssh/id_rsa -p 50200 kubesphere@40.81.5.xx
```

Download KubeKey

KubeKey is a brand-new installation tool that provides an easy, fast, and flexible way to install Kubernetes and KubeSphere.

  1. Download it so that you can generate a configuration file in the next step.

    Download KubeKey from its GitHub Release Page, or, if you have good network connections to GitHub and Googleapis, use the following command directly:

    ```shell
    curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
    ```

    If your network connections to Googleapis are poor, run the following command first to make sure you download KubeKey from the correct zone:

    ```shell
    export KKZONE=cn
    ```

    Then run the following command to download KubeKey:

    ```shell
    curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
    ```

    Note

    After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run export KKZONE=cn again before you proceed with the steps below.

    Note

    The commands above download KubeKey v3.0.7. You can change the version number in the command to download a specific version.

    Make kk executable:

    ```shell
    chmod +x kk
    ```
  2. Create an example configuration file with default configurations. Here Kubernetes v1.22.12 is used as an example.

    ```shell
    ./kk create config --with-kubesphere v3.4.0 --with-kubernetes v1.22.12
    ```

    Note

    • Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x*, v1.23.x*, v1.24.x*, v1.25.x*, and v1.26.x*. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see Support Matrix.

    • If you do not add the flag --with-kubesphere in the command in this step, KubeSphere will not be deployed unless you install it using the addons field in the configuration file or add this flag again when you use ./kk create cluster later.

    • If you add the flag --with-kubesphere without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

Example configurations

```yaml
spec:
  hosts:
  - {name: master-0, address: 40.81.5.xx, port: 50200, internalAddress: 10.0.1.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: master-1, address: 40.81.5.xx, port: 50201, internalAddress: 10.0.1.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: master-2, address: 40.81.5.xx, port: 50202, internalAddress: 10.0.1.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node000000, address: 40.81.5.xx, port: 50100, internalAddress: 10.0.0.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node000001, address: 40.81.5.xx, port: 50101, internalAddress: 10.0.0.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node000002, address: 40.81.5.xx, port: 50102, internalAddress: 10.0.0.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - master-0
    - master-1
    - master-2
    control-plane:
    - master-0
    - master-1
    - master-2
    worker:
    - node000000
    - node000001
    - node000002
```

For more information, see this file.
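Before starting the installation, it can be worth confirming that every NAT port in the hosts list is actually reachable through the load balancer. A hypothetical smoke test (the IP is the placeholder from the example above; substitute your own):

```shell
# Check that each SSH NAT port answers on the load balancer's public IP.
LB_IP=40.81.5.xx   # placeholder; use the public IP recorded earlier
for port in 50200 50201 50202 50100 50101 50102; do
  if nc -z -w 5 "$LB_IP" "$port"; then
    echo "port $port reachable"
  else
    echo "port $port unreachable" >&2
  fi
done
```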

Configure the load balancer

In addition to node information, you need to configure your load balancer in the same YAML file. You can find its IP address in Azure > KubeSphereVMRG > PublicLB. Assuming the IP address and listening port of the load balancer are 40.81.5.xx and 6443 respectively, you can refer to the following example.

```yaml
## Public LB config example
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "40.81.5.xx"
  port: 6443
```

Note

The public load balancer is used directly instead of an internal load balancer due to Azure Load Balancer limits.

Persistent storage plugin configurations

See Persistent Storage Configurations for details.

Configure the network plugin

Azure Virtual Network doesn’t support the IPIP mode used by Calico. You need to change the network plugin to flannel.

```yaml
network:
  plugin: flannel
  kubePodsCIDR: 10.233.64.0/18
  kubeServiceCIDR: 10.233.0.0/18
```
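Once the cluster is up, you can confirm that flannel is healthy. A sketch, assuming the conventional kube-flannel-ds DaemonSet name in kube-system (names can vary by flannel version and deployment):

```shell
# The flannel DaemonSet should report one ready pod per node (6 in this setup).
kubectl -n kube-system get daemonset kube-flannel-ds
# Inspect individual flannel pods and the nodes they run on.
kubectl -n kube-system get pods -l app=flannel -o wide
```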

Create a cluster

  1. After you complete the configuration, you can execute the following command to start the installation:

    ```shell
    ./kk create cluster -f config-sample.yaml
    ```

  2. Inspect the logs of the installation:

    ```shell
    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
    ```

  3. When the installation finishes, you can see the following message:

    ```
    #####################################################
    ###              Welcome to KubeSphere!           ###
    #####################################################
    Console: http://10.128.0.44:30880
    Account: admin
    Password: P@88w0rd

    NOTES:
      1. After you log into the console, please check the
         monitoring status of service components in
         the "Cluster Management". If any service is not
         ready, please wait patiently until all components
         are up and running.
      2. Please change the default password after login.

    #####################################################
    https://kubesphere.io             2020-xx-xx xx:xx:xx
    #####################################################
    ```

  4. Access the KubeSphere console using <NodeIP>:30880 with the default account and password (admin/P@88w0rd).
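A couple of quick post-install checks from master-0 can confirm the cluster matches the architecture above (the service name ks-console is the default for the KubeSphere console):

```shell
# All six nodes (3 control plane, 3 workers) should be Ready.
kubectl get nodes -o wide
# The console Service should expose NodePort 30880.
kubectl -n kubesphere-system get svc ks-console
```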

Add Additional Ports

As the Kubernetes cluster is set up on Azure instances directly, the load balancer is not integrated with Kubernetes Services. However, you can still manually map a NodePort to the load balancer. Two steps are required.

  1. Create a new Load Balance Rule in the load balancer. Load Balancer
  2. Create an Inbound Security rule to allow Internet access in the Network Security Group. Firewall
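The two steps above can also be done with the Azure CLI. A sketch, assuming the load balancer name PublicLB from this tutorial, and a NodePort of 30000; the backend pool name node-pool and NSG name node-nsg are illustrative, so replace them with the names in your resource group:

```shell
# 1) Map NodePort 30000 through the load balancer to the Node pool.
az network lb rule create \
  --resource-group KubeSphereVMRG \
  --lb-name PublicLB \
  --name my-service \
  --protocol Tcp \
  --frontend-port 30000 \
  --backend-port 30000 \
  --backend-pool-name node-pool

# 2) Allow the port through the Network Security Group.
az network nsg rule create \
  --resource-group KubeSphereVMRG \
  --nsg-name node-nsg \
  --name allow-my-service \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 30000
```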
