Cluster management
KubeClipper supports full lifecycle management for Kubernetes clusters.
View Cluster operations
On the cluster details page, click the “Operation Log” tab to see the cluster operation records. Click the “ViewLog” button on the right to inspect the detailed logs of all steps and nodes in the pop-up window, and click a step name on the left to inspect the detailed log of that step.
During the execution of a cluster operation, you can inspect real-time log updates to trace the operation progress. For operations that failed to execute, you can locate errors by the red dot under the step name and troubleshoot the cause of the failure.
Try again after a failed task
If the task failed but you do not need to modify the task parameters after troubleshooting, you can click “Retry” on the right of the operation record to retry the task at the breakpoint.
Note: The retry operation is not universal. You need to determine the cause of the task failure by yourself.
After a cluster operation (such as creation, restoration, or upgrade) fails, the cluster status may be displayed as “xx failed” and other operations cannot be performed. If the operation cannot be retried successfully, refer to the O&M document to manually rectify the cluster error, and then click More > Cluster Status > Reset Status to reset the cluster to the normal status.
Access Kubectl
The Kubernetes command-line tool, kubectl, allows you to run commands on Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, view logs, and more.
Click “More” > “Connect Terminal” in the cluster operation, and you can execute kubectl commands in the cluster kubectl pop-up window.
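For example, you can run standard kubectl commands in the terminal to check the cluster state (these are generic kubectl commands, not KubeClipper-specific):
Command example:
kubectl get nodes -o wide
kubectl get pods -A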
Cluster Settings
Edit
You can click More > Cluster Settings > Edit on the right of the cluster list to edit the cluster description, backup space, external access IP address, and cluster label information.
Save as template
You can click More > Cluster Settings > Save as Template on the right of the cluster list to save the cluster settings as a template and use it to create new clusters with similar configurations.
CRI Registry
Docker and containerd use Docker Hub as the default registry. If you need to use another private registry (especially a self-signed HTTPS registry or an HTTP registry), you need to configure the CRI registry.
Click “More” > “Cluster Settings” > “CRI Registry” on the right of the cluster page. In the pop-up window, configure the required private registry. You can select an existing registry on the platform or temporarily enter the address of a registry. For a self-signed HTTPS or HTTP registry, it is recommended to add the registry information on the Cluster Management > Registry page in advance.
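After the registry is configured, you can verify that a cluster node can pull images from it using crictl, the standard CRI command-line tool (the registry address and image below are placeholders; replace them with your own):
Command example:
crictl pull 192.168.10.10:5000/library/nginx:latest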
Cluster node management
On the “Node List” tab of the cluster detail page, you can view the list of nodes in the cluster, along with the specification, status, and role of each node.
Add cluster node
When the cluster load is high, you can add nodes to the cluster to expand its capacity. The add-node operation does not affect running services.
On the cluster detail page, under the Node List tab, click the “AddNode” button on the left, select the available nodes in the pop-up window, set the node labels, and click the “OK” button. The current version only supports adding worker nodes.
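After the add-node operation completes, you can confirm that the new node has joined the cluster and is in the Ready state, for example from the cluster terminal (a generic kubectl check):
Command example:
kubectl get nodes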
Remove cluster node
On the cluster detail page, under the Node List tab, you can remove a node by clicking the “Remove” button on the right of the node. The current version only supports removing worker nodes.
Note: When removing cluster nodes in a production environment, pay attention to workload safety to avoid application interruptions.
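For example, before removing a node that still runs workloads, you may want to drain it first so that pods are rescheduled gracefully. This is a generic Kubernetes step rather than part of the KubeClipper removal flow, and the node name below is a placeholder:
Command example:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data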
Cluster Backup and Recovery
KubeClipper cluster backup backs up the data in the ETCD database, that is, Kubernetes resource objects such as namespaces, deployments, and configMaps. The files and data generated by the resources themselves are not backed up. For example, the data and files generated by a MySQL pod are not backed up. Similarly, the files under a PV are not backed up; only the PV object is backed up. The backup function provided by KubeClipper is a hot backup and does not affect cluster usage, but backing up during the cluster's busy period is strongly discouraged.
Create a backup space
Before performing a backup operation, you need to set a backup space for the cluster, that is, set the storage location of the backup files. The storage type of the backup space can be FS storage or S3 storage. Take node local storage, NFS storage, and MinIO storage as examples:
- Local storage:
Create a storage directory. Connect to the cluster master node terminal (refer to Connect Node Terminal) and use the mkdir command to create the “/root/backup” directory on the master node.
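Command example:
mkdir -p /root/backup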
Create a backup space. Click “Cluster Management” > “Backup Space” to enter the backup space list page, click the “Create” button in the upper left corner, in the Create pop-up window, enter “Backup Space Name”, such as “local”, select “StorageType” as “FS”, fill in “backupRootDir” as “/root/backup”.
Set up the cluster backup space. When creating a cluster, select “backup space” as “local” on the “Cluster Config” page, or edit an existing cluster and select “local” as the “backup space”.
Note: Using a local node to store backup files does not require the introduction of external storage. The disadvantage is that if the local node is damaged, the backup files will also be lost, so it is strongly discouraged in a production environment.
- NFS:
Prepare NFS storage. Prepare an NFS service and create a directory on the NFS server to store backup files, such as “/data/kubeclipper/cluster-backups”.
Mount the storage directory. Connect to the cluster master node terminal (refer to Connect Node Terminal), use the mkdir command to create the “/opt/kubeclipper/cluster-backups” directory on each master node, and mount the “/data/kubeclipper/cluster-backups” directory of the NFS server to it.
Command example:
mount -t nfs {NFS_IP}:/data/kubeclipper/cluster-backups /opt/kubeclipper/cluster-backups -o proto=tcp -o nolock
Create a backup space. Click “Cluster Management” > “Backup Space” to enter the backup space list page, click the “Create” button in the upper left corner, in the Create pop-up window, enter “Backup Space Name”, such as “nfs”, select “StorageType” as “FS”, fill in “backupRootDir” as “/opt/kubeclipper/cluster-backups”.
Set up the cluster backup space. When creating a cluster, select “backup space” as “nfs” on the “Cluster Config” page, or edit an existing cluster and select “nfs” as the “backup space”.
- MinIO:
Prepare MinIO storage. Build a MinIO service (refer to the official website https://docs.min.io/docs/minio-quickstart-guide.html for the deployment process) or use an existing MinIO service.
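For a quick test environment, a single-node MinIO service can be started with Docker. This is only a sketch: it assumes Docker is installed, and the ports and credentials are examples; refer to the MinIO documentation for a production deployment.
Command example:
docker run -d --name minio -p 9000:9000 -p 9001:9001 -e MINIO_ROOT_USER=minioadmin -e MINIO_ROOT_PASSWORD=minioadmin quay.io/minio/minio server /data --console-address ":9001"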
Create a backup space. Click “Cluster Management” > “Backup Space” to enter the backup space list page, click the “Create” button in the upper left corner, and in the Create pop-up window, enter a “Backup Space Name” such as “minio”, select “S3” as the “Storage Type”, fill in a “bucket name” such as “kubeclipper-backups” (the bucket will be automatically created by KubeClipper), fill in the IP address and port of the MinIO storage service prepared in the first step as the “Endpoint”, fill in the service username and password, and click the “OK” button.
Set up the cluster backup space. When creating a cluster, select “backup space” as “minio” on the “Cluster Config” page, or edit an existing cluster and select “minio” as the “backup space”.
You can view the list and details of all backup spaces on the “Cluster Management” > “Backup Space” page and perform the following operations:
Edit: Edit the backup space description, and the username/password of the S3 type backup space.
Delete: Delete the backup space. If there are backup files under the backup space, deletion is not allowed.
Cluster backup
You can back up your cluster ETCD data by clicking the “More” > “Backup and recovery” > “Backup Cluster” button in the cluster operation.
You can view all backup files of the cluster under the Backup tab on the cluster detail page, and you can perform the following operations for backups:
Edit: Edit the backup description.
Restore: Performs a cluster restore operation to restore the cluster to the specified backup state.
Delete: Deletes the backup file.
Scheduled backup
You can also create a scheduled backup task for the cluster. Click the “More” > “Backup and recovery” > “Scheduled Backup” button in the cluster operation, and in the Scheduled Backup pop-up window, enter the scheduled backup name, execution type (repeat / onlyonce), and execution time, set the number of valid backups for a repeat scheduled backup, and click the “OK” button.
KubeClipper will perform backup tasks for the cluster at the execution time you set, and the backup files will be automatically named “Cluster Name - Scheduled Backup Name - Random Code”. For repeat scheduled backups, when the number of backup files exceeds the number of valid backups, KubeClipper will automatically delete the earliest backup files.
After the scheduled backup task is added, you can view the scheduled backup task information on the “Scheduled Backup” tab of the cluster detail page, and you can also view the backup files generated by the scheduled backup on the “Backup” tab.
For scheduled backup tasks, you can also perform the following operations:
Edit: Edit the execution time of the scheduled backup task and the number of valid backups for repeat scheduled backups.
Enable/Disable: Disabled scheduled backup tasks are temporarily stopped.
Delete: Delete a scheduled backup task.
Restore Cluster
If you perform a restore operation while the cluster is running, KubeClipper performs an overlay recovery on the cluster, that is, the ETCD data in the backup file overwrites the existing data.
You can click the “Restore” button on the right side of the backup under the Backup tab of the cluster detail page; or click the “More” > “Backup and recovery” > “Restore Cluster” button in the cluster operation, and select the backup to be restored in the Restore Cluster pop-up window. The current cluster can be restored to the specified backup state.
Note: After the kubernetes version of the cluster is upgraded, it will no longer be possible to restore the cluster to the pre-upgrade backup version.
Cluster Status
Cluster version upgrade
If the cluster version does not meet your requirements, you can upgrade the Kubernetes version of the cluster. Similar to creating a cluster, you need to prepare the required configuration package and the Kubernetes image of the target version and upload them to the specified location. For details, refer to Prepare to Create a Cluster.
Click the “More” > “Cluster status” > “Cluster Upgrade” button of the cluster operation. In the cluster upgrade pop-up window, select the installation method and registry, and select the target upgrade version. The installation method and the configuration of the kubernetes version are the same as those of creating a cluster. For details, please refer to Cluster Configuration Guide.
Cluster upgrades can be performed within a minor version or to the next minor version, but skipping minor versions is not supported. For example, you can upgrade from v1.20.2 to v1.20.13, or from v1.20.x to v1.21.x, but not from v1.20.x to v1.22.x. Upgrading from v1.23.x to v1.24.x is not currently supported.
The cluster upgrade operation may take a long time. You can view the operation log on the cluster detail page to track the cluster upgrade status.
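After the upgrade operation finishes, you can confirm the new version from the cluster terminal (generic kubectl checks; node versions are shown in the VERSION column):
Command example:
kubectl version
kubectl get nodes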
Delete cluster
You can click “More” > “Cluster Status” > “Delete Cluster” on the right of the cluster list to delete the cluster.
Note that after the cluster is deleted, it cannot be restored. You must perform this operation with great caution. If the cluster is connected to an external storage device, the volumes in the storage class whose reclaim policy is “Retain” will be retained. You can access them in other ways or manually delete them. Volumes in the storage class whose reclaim policy is “Delete” will be automatically deleted when the cluster is deleted.
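If the cluster is connected to external storage, you can list the persistent volumes and their reclaim policies before deleting the cluster, so you know which volumes will be retained (a generic kubectl query):
Command example:
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase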
Reset the status
After a cluster operation (such as creation, restoration, or upgrade) fails, the cluster status may be displayed as “xx failed” and other operations cannot be performed. If the operation cannot be retried successfully, refer to the O&M document to manually rectify the cluster error, and then click More > Cluster Status > Reset Status to reset the cluster to the normal status.
Cluster plugin management
In addition to installing plugins when creating a cluster, you can also install plugins for a running cluster. Taking the installation of a storage plugin as an example, click the “More” > “Plugin Management” > “Add Storage” button in the cluster operation to enter the Add Storage page, where you can install the NFS plugin for the cluster. The installation configuration is the same as in cluster creation.
For installed plugins, you can view the plugin information on the cluster detail page, and perform the following operations:
- Save as Template: Save the plugin information as a template for use by other clusters.
- Remove Plugin: Uninstall the cluster plugin.
Cluster certificate management
Update cluster certificate
The default validity period of the kubernetes cluster certificate is one year. You can view the certificate expiration time in the basic information on the cluster detail page. You can also view the certificate expiration notification in the cluster list the day before the certificate expires. To update the cluster certificate, click “More” > “Cluster Certificate” > “Update Cluster Certificate” in the cluster operation to update all cluster certificates.
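If the cluster was deployed with kubeadm, you can also check the certificate expiration directly on a master node. This is a generic kubeadm command, not a KubeClipper feature:
Command example:
kubeadm certs check-expiration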
View kubeconfig file
You can click the “More” > “Cluster Certificate” > “View KubeConfig File” button in the cluster operation to view the cluster kubeconfig file, or click the “Download” button in the pop-up window to download the kubeconfig file.
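After downloading the kubeconfig file, you can use it with a locally installed kubectl to access the cluster from your own machine (the file path below is a placeholder):
Command example:
kubectl --kubeconfig ./kubeconfig.yaml get nodes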