Deploy and Maintain an Online TiDB Cluster Using TiUP

This document focuses on how to use the TiUP cluster component. For the complete steps of online deployment, refer to Deploy a TiDB Cluster Using TiUP.

Similar to the TiUP playground component used for a local test deployment, the TiUP cluster component quickly deploys TiDB for a production environment. Compared with playground, the cluster component provides more powerful production cluster management features, including upgrading, scaling, and even operation auditing.

For the help information of the cluster component, run the following command:

  tiup cluster

  Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.3/cluster
  Deploy a TiDB cluster for production

  Usage:
    tiup cluster [command]

  Available Commands:
    check       Precheck a cluster
    deploy      Deploy a cluster for production
    start       Start a TiDB cluster
    stop        Stop a TiDB cluster
    restart     Restart a TiDB cluster
    scale-in    Scale in a TiDB cluster
    scale-out   Scale out a TiDB cluster
    destroy     Destroy a specified cluster
    clean       (Experimental) Clean up a specified cluster
    upgrade     Upgrade a specified TiDB cluster
    display     Display information of a TiDB cluster
    list        List all clusters
    audit       Show audit log of cluster operation
    import      Import an existing TiDB cluster from TiDB-Ansible
    edit-config Edit TiDB cluster config
    reload      Reload a TiDB cluster's config and restart if needed
    patch       Replace the remote package with a specified package and restart the service
    help        Help about any command

  Flags:
    -c, --concurrency int     Maximum number of concurrent tasks allowed (defaults to `5`)
        --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
    -h, --help                help for tiup
        --ssh string          (Experimental) The executor type. Optional values are 'builtin', 'system', and 'none'.
        --ssh-timeout uint    Timeout in seconds to connect a host via SSH. Operations that don't need an SSH connection are ignored. (default 5)
    -v, --version             TiUP version
        --wait-timeout uint   Timeout in seconds to wait for an operation to complete. Inapplicable operations are ignored. (defaults to `120`)
    -y, --yes                 Skip all confirmations and assumes 'yes'

Deploy the cluster

To deploy the cluster, run the tiup cluster deploy command. The usage of the command is as follows:

  tiup cluster deploy <cluster-name> <version> <topology.yaml> [flags]

This command requires you to provide the cluster name, the TiDB cluster version (such as v7.1.5), and a topology file of the cluster.

To write a topology file, refer to the example. The following file is an example of the simplest topology:


Note

The topology file used by the TiUP cluster component for deployment and scaling is written using yaml syntax, so make sure that the indentation is correct.

  ---
  pd_servers:
    - host: 172.16.5.134
      name: pd-134
    - host: 172.16.5.139
      name: pd-139
    - host: 172.16.5.140
      name: pd-140
  tidb_servers:
    - host: 172.16.5.134
    - host: 172.16.5.139
    - host: 172.16.5.140
  tikv_servers:
    - host: 172.16.5.134
    - host: 172.16.5.139
    - host: 172.16.5.140
  tiflash_servers:
    - host: 172.16.5.141
    - host: 172.16.5.142
    - host: 172.16.5.143
  grafana_servers:
    - host: 172.16.5.134
  monitoring_servers:
    - host: 172.16.5.134

By default, TiUP deploys binary files built for the amd64 architecture. If a target machine uses the arm64 architecture, you can configure it in the topology file:

  global:
    arch: "arm64"            # Configures all machines to use the binary files of the arm64 architecture by default
  tidb_servers:
    - host: 172.16.5.134
      arch: "amd64"          # Configures this machine to use the binary files of the amd64 architecture
    - host: 172.16.5.139
      arch: "arm64"          # Configures this machine to use the binary files of the arm64 architecture
    - host: 172.16.5.140     # Machines that are not configured with the arch field use the default value in the global field, which is arm64 in this case.
  ...

Save the file as /tmp/topology.yaml. If you want to use TiDB v7.1.5 and your cluster name is prod-cluster, run the following command:

  tiup cluster deploy -p prod-cluster v7.1.5 /tmp/topology.yaml

During the execution, TiUP asks you to confirm your topology again and requires the root password of the target machines (the -p flag indicates that a password is used to log in):

  Please confirm your topology:
  TiDB Cluster: prod-cluster
  TiDB Version: v7.1.5
  Type        Host          Ports                            OS/Arch       Directories
  ----        ----          -----                            -------       -----------
  pd          172.16.5.134  2379/2380                        linux/x86_64  deploy/pd-2379,data/pd-2379
  pd          172.16.5.139  2379/2380                        linux/x86_64  deploy/pd-2379,data/pd-2379
  pd          172.16.5.140  2379/2380                        linux/x86_64  deploy/pd-2379,data/pd-2379
  tikv        172.16.5.134  20160/20180                      linux/x86_64  deploy/tikv-20160,data/tikv-20160
  tikv        172.16.5.139  20160/20180                      linux/x86_64  deploy/tikv-20160,data/tikv-20160
  tikv        172.16.5.140  20160/20180                      linux/x86_64  deploy/tikv-20160,data/tikv-20160
  tidb        172.16.5.134  4000/10080                       linux/x86_64  deploy/tidb-4000
  tidb        172.16.5.139  4000/10080                       linux/x86_64  deploy/tidb-4000
  tidb        172.16.5.140  4000/10080                       linux/x86_64  deploy/tidb-4000
  tiflash     172.16.5.141  9000/8123/3930/20170/20292/8234  linux/x86_64  deploy/tiflash-9000,data/tiflash-9000
  tiflash     172.16.5.142  9000/8123/3930/20170/20292/8234  linux/x86_64  deploy/tiflash-9000,data/tiflash-9000
  tiflash     172.16.5.143  9000/8123/3930/20170/20292/8234  linux/x86_64  deploy/tiflash-9000,data/tiflash-9000
  prometheus  172.16.5.134  9090                                           deploy/prometheus-9090,data/prometheus-9090
  grafana     172.16.5.134  3000                                           deploy/grafana-3000
  Attention:
      1. If the topology is not what you expected, check your yaml file.
      2. Please confirm there is no port/directory conflicts in same host.
  Do you want to continue? [y/N]:

After you enter the password, TiUP cluster downloads the required components and deploys them to the corresponding machines. When you see the following message, the deployment is successful:

  Deployed cluster `prod-cluster` successfully

View the cluster list

After the cluster is successfully deployed, view the cluster list by running the following command:

  tiup cluster list

  Starting /root/.tiup/components/cluster/v1.11.3/cluster list
  Name          User  Version  Path                                               PrivateKey
  ----          ----  -------  ----                                               ----------
  prod-cluster  tidb  v7.1.5   /root/.tiup/storage/cluster/clusters/prod-cluster  /root/.tiup/storage/cluster/clusters/prod-cluster/ssh/id_rsa

Start the cluster

After the cluster is successfully deployed, start the cluster by running the following command:

  tiup cluster start prod-cluster

If you forget the name of your cluster, view the cluster list by running tiup cluster list.

TiUP uses systemd to start the daemon processes. If a process terminates unexpectedly, it is automatically restarted 15 seconds later.
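
If you need to inspect a service directly on a target machine, you can use standard systemd tools. The following is a minimal sketch; the unit name is an assumption based on the <component>-<port> naming used for the deploy directories shown in this document, so adjust it to your deployment.

  # On the target machine; tikv-20160.service is an assumed unit name
  systemctl status tikv-20160.service
  # View recent logs of the same unit through journald
  journalctl -u tikv-20160.service --since "10 minutes ago"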

Check the cluster status

TiUP provides the tiup cluster display command to view the status of each component in the cluster. With this command, you don’t have to log in to each machine to see the component status. The usage of the command is as follows:

  tiup cluster display prod-cluster

  Starting /root/.tiup/components/cluster/v1.11.3/cluster display prod-cluster
  TiDB Cluster: prod-cluster
  TiDB Version: v7.1.5
  ID                  Role        Host          Ports                            OS/Arch       Status  Data Dir              Deploy Dir
  --                  ----        ----          -----                            -------       ------  --------              ----------
  172.16.5.134:3000   grafana     172.16.5.134  3000                             linux/x86_64  Up      -                     deploy/grafana-3000
  172.16.5.134:2379   pd          172.16.5.134  2379/2380                        linux/x86_64  Up|L    data/pd-2379          deploy/pd-2379
  172.16.5.139:2379   pd          172.16.5.139  2379/2380                        linux/x86_64  Up|UI   data/pd-2379          deploy/pd-2379
  172.16.5.140:2379   pd          172.16.5.140  2379/2380                        linux/x86_64  Up      data/pd-2379          deploy/pd-2379
  172.16.5.134:9090   prometheus  172.16.5.134  9090                             linux/x86_64  Up      data/prometheus-9090  deploy/prometheus-9090
  172.16.5.134:4000   tidb        172.16.5.134  4000/10080                       linux/x86_64  Up      -                     deploy/tidb-4000
  172.16.5.139:4000   tidb        172.16.5.139  4000/10080                       linux/x86_64  Up      -                     deploy/tidb-4000
  172.16.5.140:4000   tidb        172.16.5.140  4000/10080                       linux/x86_64  Up      -                     deploy/tidb-4000
  172.16.5.141:9000   tiflash     172.16.5.141  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      data/tiflash-9000     deploy/tiflash-9000
  172.16.5.142:9000   tiflash     172.16.5.142  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      data/tiflash-9000     deploy/tiflash-9000
  172.16.5.143:9000   tiflash     172.16.5.143  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      data/tiflash-9000     deploy/tiflash-9000
  172.16.5.134:20160  tikv        172.16.5.134  20160/20180                      linux/x86_64  Up      data/tikv-20160       deploy/tikv-20160
  172.16.5.139:20160  tikv        172.16.5.139  20160/20180                      linux/x86_64  Up      data/tikv-20160       deploy/tikv-20160
  172.16.5.140:20160  tikv        172.16.5.140  20160/20180                      linux/x86_64  Up      data/tikv-20160       deploy/tikv-20160

The Status column uses Up or Down to indicate whether the service is running normally.

For the PD component, |L or |UI might be appended to Up or Down. |L indicates that the PD node is a Leader, and |UI indicates that TiDB Dashboard is running on the PD node.
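
If you need the status in a machine-readable form, you can use the --format flag listed in the global flags above. Filtering by role is shown as an assumption, based on the -R/--role and -N/--node flags that other subcommands in this document accept.

  # Output the cluster status as JSON (the --format flag appears in the global flags above)
  tiup cluster display prod-cluster --format json
  # Show only PD instances; the -R flag here is an assumption borrowed from other subcommands
  tiup cluster display prod-cluster -R pd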

Scale in a cluster


Note

This section describes only the syntax of the scale-in command. For detailed steps of online scaling, refer to Scale a TiDB Cluster Using TiUP.

Scaling in a cluster means taking some node(s) offline. This operation removes the specific node(s) from the cluster and deletes the remaining data files.

Because taking the TiKV, TiFlash, and TiDB Binlog components offline is asynchronous (it requires removing the node through the API) and takes a long time (it requires continuously observing whether the node has been taken offline successfully), these components are given special treatment.

  • For TiKV, TiFlash, and Binlog:

    • TiUP cluster takes the node offline through API and directly exits without waiting for the process to be completed.

    • Afterwards, when a command related to the cluster operation is executed, TiUP cluster examines whether there is a TiKV, TiFlash, or Binlog node that has been taken offline. If not, TiUP cluster continues with the specified operation; if there is, TiUP cluster takes the following steps:

      1. Stop the service of the node that has been taken offline.
      2. Clean up the data files related to the node.
      3. Remove the node from the cluster topology.
  • For other components:

    • When taking a PD node offline, TiUP cluster quickly deletes the specified node from the cluster through the API, stops the service of the specified PD node, and deletes the related data files.
    • When taking other components offline, TiUP cluster directly stops the node service and deletes the related data files.

The basic usage of the scale-in command:

  tiup cluster scale-in <cluster-name> -N <node-id>

To use this command, you need to specify at least two pieces of information: the cluster name and the node ID. The node ID can be obtained by using the tiup cluster display command in the previous section.

For example, to make the TiKV node on 172.16.5.140 offline, run the following command:

  tiup cluster scale-in prod-cluster -N 172.16.5.140:20160

By running tiup cluster display, you can see that the TiKV node is marked Offline:

  tiup cluster display prod-cluster

  Starting /root/.tiup/components/cluster/v1.11.3/cluster display prod-cluster
  TiDB Cluster: prod-cluster
  TiDB Version: v7.1.5
  ID                  Role        Host          Ports                            OS/Arch       Status   Data Dir              Deploy Dir
  --                  ----        ----          -----                            -------       ------   --------              ----------
  172.16.5.134:3000   grafana     172.16.5.134  3000                             linux/x86_64  Up       -                     deploy/grafana-3000
  172.16.5.134:2379   pd          172.16.5.134  2379/2380                        linux/x86_64  Up|L     data/pd-2379          deploy/pd-2379
  172.16.5.139:2379   pd          172.16.5.139  2379/2380                        linux/x86_64  Up|UI    data/pd-2379          deploy/pd-2379
  172.16.5.140:2379   pd          172.16.5.140  2379/2380                        linux/x86_64  Up       data/pd-2379          deploy/pd-2379
  172.16.5.134:9090   prometheus  172.16.5.134  9090                             linux/x86_64  Up       data/prometheus-9090  deploy/prometheus-9090
  172.16.5.134:4000   tidb        172.16.5.134  4000/10080                       linux/x86_64  Up       -                     deploy/tidb-4000
  172.16.5.139:4000   tidb        172.16.5.139  4000/10080                       linux/x86_64  Up       -                     deploy/tidb-4000
  172.16.5.140:4000   tidb        172.16.5.140  4000/10080                       linux/x86_64  Up       -                     deploy/tidb-4000
  172.16.5.141:9000   tiflash     172.16.5.141  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       data/tiflash-9000     deploy/tiflash-9000
  172.16.5.142:9000   tiflash     172.16.5.142  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       data/tiflash-9000     deploy/tiflash-9000
  172.16.5.143:9000   tiflash     172.16.5.143  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       data/tiflash-9000     deploy/tiflash-9000
  172.16.5.134:20160  tikv        172.16.5.134  20160/20180                      linux/x86_64  Up       data/tikv-20160       deploy/tikv-20160
  172.16.5.139:20160  tikv        172.16.5.139  20160/20180                      linux/x86_64  Up       data/tikv-20160       deploy/tikv-20160
  172.16.5.140:20160  tikv        172.16.5.140  20160/20180                      linux/x86_64  Offline  data/tikv-20160       deploy/tikv-20160

After PD schedules the data on the node to other TiKV nodes, this node will be deleted automatically.
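
Because the offline process is asynchronous, a simple way to follow it is to poll tiup cluster display until the node disappears from the topology. The following loop is only a sketch using standard shell tools; the node ID is the one taken offline above.

  # Poll the cluster status every 30 seconds until the offline TiKV node is removed
  while tiup cluster display prod-cluster | grep -q '172.16.5.140:20160'; do
    sleep 30
  done
  echo "TiKV node 172.16.5.140:20160 has been removed from the topology"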

Scale out a cluster


Note

This section describes only the syntax of the scale-out command. For detailed steps of online scaling, refer to Scale a TiDB Cluster Using TiUP.

The scale-out operation has an inner logic similar to that of deployment: the TiUP cluster component first ensures the SSH connection to the node, creates the required directories on the target node, then executes the deployment operation, and starts the node service.

When you scale out PD, the node is added to the cluster by join, and the configurations of services associated with PD are updated. When you scale out other services, the service is started directly and added to the cluster.

All services conduct correctness validation when they are scaled out. The validation results show whether the scaling-out is successful.

To add a TiKV node and a PD node in the tidb-test cluster, take the following steps:

  1. Create a scale.yaml file, and add IPs of the new TiKV and PD nodes:


    Note

    You need to create a topology file, which includes only the description of the new nodes, not the existing nodes.

    ---
    pd_servers:
      - host: 172.16.5.140
    tikv_servers:
      - host: 172.16.5.140
  2. Perform the scale-out operation. TiUP cluster adds the corresponding nodes to the cluster according to the port, directory, and other information described in scale.yaml.

    tiup cluster scale-out tidb-test scale.yaml

    After the command is executed, you can check the status of the scaled-out cluster by running tiup cluster display tidb-test.

Rolling upgrade


Note

This section describes only the syntax of the upgrade command. For detailed steps of online upgrade, refer to Upgrade TiDB Using TiUP.

The rolling upgrade feature leverages the distributed capabilities of TiDB. The upgrade process is made as transparent as possible to the application, and does not affect the business.

Before the upgrade, TiUP cluster checks whether the configuration file of each component is valid. If so, the components are upgraded node by node; if not, TiUP reports an error and exits. The operations vary with different nodes.

Operations for different nodes

  • Upgrade the PD node

    • First, upgrade non-Leader nodes.
    • After all the non-Leader nodes are upgraded, upgrade the Leader node.
      • The upgrade tool sends a command to PD to migrate the Leader to a node that has already been upgraded.
      • After the Leader role is switched to another node, upgrade the previous Leader node.
    • During the upgrade, if any unhealthy node is detected, the tool stops this upgrade operation and exits. You need to manually analyze the cause, fix the issue and run the upgrade again.
  • Upgrade the TiKV node

    • First, add a scheduling operation in PD that migrates the Region Leader of this TiKV node. This ensures that the upgrade process does not affect the business.
    • After the Leader is migrated, upgrade this TiKV node.
    • After the upgraded TiKV is started normally, remove the scheduling of the Leader.
  • Upgrade other services

    • Stop the service normally and update the node.

Upgrade command

The flags for the upgrade command are as follows:

  Usage:
    cluster upgrade <cluster-name> <version> [flags]

  Flags:
        --force                  Force upgrade won't transfer leader
    -h, --help                   help for upgrade
        --transfer-timeout int   Timeout in seconds when transferring PD and TiKV store leaders (default 600)

  Global Flags:
        --ssh string         (Experimental) The executor type. Optional values are 'builtin', 'system', and 'none'.
        --wait-timeout int   Timeout of waiting the operation
        --ssh-timeout int    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
    -y, --yes                Skip all confirmations and assumes 'yes'

For example, the following command upgrades the cluster to v7.1.5:

  tiup cluster upgrade tidb-test v7.1.5
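
If leader transfer takes longer in a large cluster, you can raise the timeout with the --transfer-timeout flag listed above; the value below is only illustrative.

  # Allow up to 20 minutes for transferring PD and TiKV store leaders during the rolling upgrade
  tiup cluster upgrade tidb-test v7.1.5 --transfer-timeout 1200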

Update configuration

The TiUP cluster component saves a current configuration for each cluster. If you want to dynamically update the component configurations, edit this configuration by executing the tiup cluster edit-config <cluster-name> command. For example:

  tiup cluster edit-config prod-cluster

TiUP cluster opens the configuration file in the vi editor. If you want to use other editors, use the EDITOR environment variable to customize the editor, such as export EDITOR=nano.

After editing the file, save the changes. To apply the new configuration to the cluster, execute the following command:

  tiup cluster reload prod-cluster

The command sends the configuration to the target machine and restarts the cluster to make the configuration take effect.
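
To limit the restart scope, reload also accepts the -R/--role and -N/--node filters (as used for Grafana later in this document). For example:

  # Reload the configuration of TiKV nodes only
  tiup cluster reload prod-cluster -R tikv
  # Reload a single instance; the node ID comes from tiup cluster display
  tiup cluster reload prod-cluster -N 172.16.5.134:4000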


Note

For monitoring components, customize the configuration by executing the tiup cluster edit-config command to add a custom configuration path on the corresponding instance. For example:

  ---
  grafana_servers:
    - host: 172.16.5.134
      dashboard_dir: /path/to/local/dashboards/dir
  monitoring_servers:
    - host: 172.16.5.134
      rule_dir: /path/to/local/rules/dir
  alertmanager_servers:
    - host: 172.16.5.134
      config_file: /path/to/local/alertmanager.yml

The content and format requirements for files under the specified path are as follows:

  • The folder specified in the dashboard_dir field of grafana_servers must contain full *.json files.
  • The folder specified in the rule_dir field of monitoring_servers must contain full *.rules.yml files.
  • For the format of files specified in the config_file field of alertmanager_servers, refer to the Alertmanager configuration template.

When you execute tiup cluster reload, TiUP first deletes all old configuration files on the target machine and then uploads the corresponding configuration from the control machine to the corresponding configuration directory of the target machine. Therefore, if you want to modify a particular configuration file, make sure that all configuration files (including the unmodified ones) are in the same directory. For example, to modify Grafana’s tidb.json file, you need to first copy all the *.json files from Grafana’s dashboards directory to your local directory. Otherwise, other JSON files will be missing from the target machine.
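
For example, a sketch of preparing a local dashboards directory before editing tidb.json. The remote dashboards path is an assumption based on the Grafana deploy directory shown by tiup cluster display; adjust the host and paths to your deployment.

  # Hypothetical paths: adjust the host, deploy directory, and local directory to your environment
  mkdir -p /path/to/local/dashboards/dir
  scp 'tidb@172.16.5.134:/path/to/deploy/grafana-3000/dashboards/*.json' /path/to/local/dashboards/dir/
  # Edit tidb.json locally, then point dashboard_dir at this directory via tiup cluster edit-config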


Note

If you have configured the dashboard_dir field of grafana_servers, after executing the tiup cluster rename command to rename the cluster, you need to complete the following operations:

  1. In the local dashboards directory, change the cluster name to the new cluster name.
  2. In the local dashboards directory, change datasource to the new cluster name, because datasource is named after the cluster name.
  3. Execute the tiup cluster reload -R grafana command.

Update component

For a normal upgrade, you can use the upgrade command. But in some scenarios, such as debugging, you might need to replace the currently running component with a temporary package. To achieve this, use the patch command:

  tiup cluster patch --help

  Replace the remote package with a specified package and restart the service

  Usage:
    cluster patch <cluster-name> <package-path> [flags]

  Flags:
    -h, --help                    help for patch
    -N, --node strings            Specify the nodes
        --offline                 Patch a stopped cluster
        --overwrite               Use this package in the future scale-out operations
    -R, --role strings            Specify the roles
        --transfer-timeout uint   Timeout in seconds when transferring PD and TiKV store leaders, also for TiCDC drain one capture (default 600)

  Global Flags:
    -c, --concurrency int     max number of parallel tasks allowed (default 5)
        --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
        --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
        --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
        --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
    -y, --yes                 Skip all confirmations and assumes 'yes'

If a TiDB hotfix package is in /tmp/tidb-hotfix.tar.gz and you want to replace all the TiDB packages in the cluster, run the following command:

  tiup cluster patch test-cluster /tmp/tidb-hotfix.tar.gz -R tidb

You can also replace only one TiDB package in the cluster:

  tiup cluster patch test-cluster /tmp/tidb-hotfix.tar.gz -N 172.16.4.5:4000

Import TiDB Ansible cluster


Note

Currently, TiUP cluster’s support for TiSpark is still experimental. It is not supported to import a TiDB cluster with TiSpark enabled.

Before TiUP was released, TiDB Ansible was often used to deploy TiDB clusters. To enable TiUP to take over a cluster deployed by TiDB Ansible, use the import command.

The usage of the import command is as follows:

  tiup cluster import --help

  Import an exist TiDB cluster from TiDB-Ansible

  Usage:
    cluster import [flags]

  Flags:
    -d, --dir string         The path to TiDB-Ansible directory
    -h, --help               help for import
        --inventory string   The name of inventory file (default "inventory.ini")
        --no-backup          Don't backup ansible dir, useful when there're multiple inventory files
    -r, --rename NAME        Rename the imported cluster to NAME

  Global Flags:
        --ssh string         (Experimental) The executor type. Optional values are 'builtin', 'system', and 'none'.
        --wait-timeout int   Timeout of waiting the operation
        --ssh-timeout int    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
    -y, --yes                Skip all confirmations and assumes 'yes'

You can use either of the following commands to import a TiDB Ansible cluster:

  cd tidb-ansible
  tiup cluster import

  tiup cluster import --dir=/path/to/tidb-ansible
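
The flags listed in the help above can be combined. For example, to import from a specific TiDB Ansible directory and rename the cluster in the same step:

  tiup cluster import --dir=/path/to/tidb-ansible --rename=prod-cluster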

View the operation log

To view the operation log, use the audit command. The usage of the audit command is as follows:

  Usage:
    tiup cluster audit [audit-id] [flags]

  Flags:
    -h, --help   help for audit

If [audit-id] is not specified, the command shows a list of commands that have been executed. For example:

  tiup cluster audit

  Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.3/cluster audit
  ID      Time                       Command
  --      ----                       -------
  4BLhr0  2024-04-26T23:55:09+08:00  /home/tidb/.tiup/components/cluster/v1.11.3/cluster deploy test v7.1.5 /tmp/topology.yaml
  4BKWjF  2024-04-26T23:36:57+08:00  /home/tidb/.tiup/components/cluster/v1.11.3/cluster deploy test v7.1.5 /tmp/topology.yaml
  4BKVwH  2024-04-26T23:02:08+08:00  /home/tidb/.tiup/components/cluster/v1.11.3/cluster deploy test v7.1.5 /tmp/topology.yaml
  4BKKH1  2024-04-26T16:39:04+08:00  /home/tidb/.tiup/components/cluster/v1.11.3/cluster destroy test
  4BKKDx  2024-04-26T16:36:57+08:00  /home/tidb/.tiup/components/cluster/v1.11.3/cluster deploy test v7.1.5 /tmp/topology.yaml

The first column is the audit-id. To view the execution log of a certain command, pass its audit-id as the argument, as follows:

  tiup cluster audit 4BLhr0

Run commands on a host in the TiDB cluster

To run commands on a host in the TiDB cluster, use the exec command. The usage of the exec command is as follows:

  Usage:
    cluster exec <cluster-name> [flags]

  Flags:
        --command string   the command run on cluster host (default "ls")
    -h, --help             help for exec
    -N, --node strings     Only exec on host with specified nodes
    -R, --role strings     Only exec on host with specified roles
        --sudo             use root permissions (default false)

  Global Flags:
        --ssh-timeout int   Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
    -y, --yes               Skip all confirmations and assumes 'yes'

For example, to execute ls /tmp on all TiDB nodes, run the following command:

  tiup cluster exec test-cluster --command='ls /tmp'
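
Using the flags listed in the help above, you can restrict the command to specific roles or run it with root permissions; the commands below are only illustrative.

  # Check disk usage only on TiKV hosts
  tiup cluster exec test-cluster --command='df -h' -R tikv
  # Run a command with root permissions on all hosts
  tiup cluster exec test-cluster --command='sysctl vm.swappiness' --sudo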

Cluster controllers

Before TiUP was released, you could control the cluster using tidb-ctl, tikv-ctl, pd-ctl, and other tools. To make these tools easier to download and use, TiUP integrates them into an all-in-one component, ctl.

  Usage:
    tiup ctl:v<CLUSTER_VERSION> {tidb/pd/tikv/binlog/etcd} [flags]

  Flags:
    -h, --help   help for tiup

The ctl commands correspond to those of the previous tools as follows:

  tidb-ctl [args] = tiup ctl tidb [args]
  pd-ctl [args] = tiup ctl pd [args]
  tikv-ctl [args] = tiup ctl tikv [args]
  binlogctl [args] = tiup ctl binlog [args]
  etcdctl [args] = tiup ctl etcd [args]

For example, if you previously viewed the store information by running pd-ctl -u http://127.0.0.1:2379 store, now you can run the following command in TiUP:

  tiup ctl:v<CLUSTER_VERSION> pd -u http://127.0.0.1:2379 store
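
For instance, with a v7.1.5 cluster, replace the placeholder with the actual version:

  tiup ctl:v7.1.5 pd -u http://127.0.0.1:2379 store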

Environment checks for target machines

You can use the check command to perform a series of checks on the environment of the target machine and output the check results. By executing the check command, you can find common unreasonable configurations or unsupported situations. The command flag list is as follows:

  Usage:
    tiup cluster check <topology.yml | cluster-name> [flags]

  Flags:
        --apply                  Try to fix failed checks
        --cluster                Check existing cluster, the input is a cluster name.
        --enable-cpu             Enable CPU thread count check
        --enable-disk            Enable disk IO (fio) check
        --enable-mem             Enable memory size check
    -h, --help                   help for check
    -i, --identity_file string   The path of the SSH identity file. If specified, public key authentication will be used.
    -p, --password               Use password of target hosts. If specified, password authentication will be used.
        --user string            The user name to login via SSH. The user must has root (or sudo) privilege.

By default, this command checks the environment before deployment. By specifying the --cluster flag, you can switch the mode and check the target machines of an existing cluster. For example:

  # Check the target servers before deployment
  tiup cluster check topology.yml --user tidb -p
  # Check the servers of an existing cluster
  tiup cluster check <cluster-name> --cluster

The CPU thread count check, memory size check, and disk performance check are disabled by default. For the production environment, it is recommended that you enable the three checks and make sure they pass to obtain the best performance.

  • CPU: If the number of threads is greater than or equal to 16, the check is passed.
  • Memory: If the total size of physical memory is greater than or equal to 32 GB, the check is passed.
  • Disk: Execute fio test on the partitions of data_dir and record the results.

When running the checks, if the --apply flag is specified, the program automatically repairs the failed items. Automatic repair is limited to some items that can be adjusted by modifying the configuration or system parameters. Other unrepaired items need to be handled manually according to the actual situation.
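
For example, a pre-deployment check with the three optional checks enabled, followed by a run that attempts automatic repair; both commands use only the flags listed above.

  # Enable the CPU, memory, and disk checks before deploying a production cluster
  tiup cluster check topology.yml --user tidb -p --enable-cpu --enable-mem --enable-disk
  # Try to automatically fix the failed items that can be repaired
  tiup cluster check topology.yml --user tidb -p --apply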

Environment checks are not necessary for deploying a cluster. For the production environment, it is recommended to perform environment checks and pass all check items before deployment. If not all the check items are passed, the cluster might be deployed and run normally, but the best performance might not be obtained.

Use the system’s native SSH client to connect to cluster

For all the operations above that are performed on the cluster machines, TiUP uses its embedded SSH client to connect to the cluster and execute commands. However, in some scenarios, you might also need to use the SSH client native to the control machine system to perform such cluster operations. For example:

  • To use an SSH plug-in for authentication
  • To use a customized SSH client

In these cases, you can use the --ssh=system command-line flag to enable the system’s native SSH client:

  • Deploy a cluster: tiup cluster deploy <cluster-name> <version> <topo> --ssh=system. Fill in the name of your cluster for <cluster-name>, the TiDB version to be deployed (such as v7.1.5) for <version>, and the topology file for <topo>.
  • Start a cluster: tiup cluster start <cluster-name> --ssh=system
  • Upgrade a cluster: tiup cluster upgrade ... --ssh=system

You can add --ssh=system in all cluster operation commands above to use the system’s native SSH client.

To avoid adding such a flag in every command, you can use the TIUP_NATIVE_SSH environment variable to specify whether to use the system’s native SSH client:

  export TIUP_NATIVE_SSH=true
  # or
  export TIUP_NATIVE_SSH=1
  # or
  export TIUP_NATIVE_SSH=enable

If you specify this environment variable and --ssh at the same time, --ssh has higher priority.


Note

During the process of cluster deployment, if you need to use a password for connection (-p), or a passphrase is configured for the key file, you must ensure that sshpass is installed on the control machine; otherwise, a timeout error is reported.

Migrate control machine and back up TiUP data

The TiUP data is stored in the .tiup directory in the user’s home directory. To migrate the control machine, take the following steps to copy the .tiup directory to the corresponding target machine; a consolidated sketch of these commands follows the steps:

  1. Execute tar czvf tiup.tar.gz .tiup in the home directory of the original machine.

  2. Copy tiup.tar.gz to the home directory of the target machine.

  3. Execute tar xzvf tiup.tar.gz in the home directory of the target machine.

  4. Add the .tiup directory to the PATH environment variable.

    If you use bash and you are the tidb user, you can add export PATH=/home/tidb/.tiup/bin:$PATH to ~/.bashrc and execute source ~/.bashrc. Adjust the path according to the shell and the user that you use.
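
Putting the steps together, a minimal sketch that assumes the tidb user on both machines and scp for the copy (the hostname is illustrative):

  # On the original control machine
  cd ~ && tar czvf tiup.tar.gz .tiup
  scp tiup.tar.gz tidb@new-control-host:~/
  # On the target machine
  cd ~ && tar xzvf tiup.tar.gz
  echo 'export PATH=/home/tidb/.tiup/bin:$PATH' >> ~/.bashrc && source ~/.bashrc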


Note

It is recommended that you back up the .tiup directory regularly to avoid the loss of TiUP data caused by abnormal conditions, such as disk damage of the control machine.

Back up and restore meta files for cluster deployment and O&M

If the meta files used for operation and maintenance (O&M) are lost, managing the cluster using TiUP will fail. It is recommended that you back up the meta files regularly by running the following command:

  tiup cluster meta backup ${cluster_name}
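
To run the backup regularly, you can schedule it with cron. This is only a sketch; the schedule, the working directory, and the tiup binary path are illustrative, and the location of the resulting backup file depends on your TiUP version and working directory.

  # Hypothetical crontab entry for the tidb user: back up the prod-cluster meta files daily at 02:00
  0 2 * * * cd /home/tidb/tiup-meta-backups && /home/tidb/.tiup/bin/tiup cluster meta backup prod-cluster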

If the meta files are lost, you can restore them by running the following command:

  tiup cluster meta restore ${cluster_name} ${backup_file}


Note

The restore operation overwrites the current meta files. Therefore, it is recommended to restore the meta files only when they are lost.