TiUP Common Operations

This document describes the following common operations when you operate and maintain a TiDB cluster using TiUP.

  • View the cluster list
  • Start the cluster
  • View the cluster status
  • Modify the configuration
  • Replace with a hotfix package
  • Rename the cluster
  • Stop the cluster
  • Clean up cluster data
  • Destroy the cluster

View the cluster list

You can manage multiple TiDB clusters using the TiUP cluster component. When a TiDB cluster is deployed, the cluster appears in the TiUP cluster list.

To view the list, run the following command:

    tiup cluster list

Start the cluster

The components in the TiDB cluster are started in the following order:

PD > TiKV > Pump > TiDB > TiFlash > Drainer > TiCDC > Prometheus > Grafana > Alertmanager

To start the cluster, run the following command:

    tiup cluster start ${cluster-name}


Note

Replace ${cluster-name} with the name of your cluster. If you forget the cluster name, check it by running tiup cluster list.

You can start only some of the components by adding the -R or -N parameter to the command. For example:

  • This command starts only the PD component:

    tiup cluster start ${cluster-name} -R pd

  • This command starts only the PD components on the 1.2.3.4 and 1.2.3.5 hosts:

    tiup cluster start ${cluster-name} -N 1.2.3.4:2379,1.2.3.5:2379


Note

If you start specified components by using the -R or -N parameter, make sure the starting order is correct. For example, start the PD component before the TiKV component; otherwise, the start might fail.

View the cluster status

After starting the cluster, check the status of each component to ensure that they work normally. TiUP provides the display command, so you do not have to log in to every machine to view the component status.

    tiup cluster display ${cluster-name}

Modify the configuration

When the cluster is in operation, if you need to modify the parameters of a component, run the edit-config command. The detailed steps are as follows:

  1. Open the configuration file of the cluster in the editing mode:

    tiup cluster edit-config ${cluster-name}
  2. Configure the parameters:

    • If the configuration is globally effective for a component, edit server_configs:

      server_configs:
        tidb:
          log.slow-threshold: 300
    • If the configuration takes effect on a specific node, edit the configuration in config of the node:

      tidb_servers:
      - host: 10.0.1.11
        port: 4000
        config:
          log.slow-threshold: 300

    For the parameter format, see the TiUP parameter template.

    Use . to represent the hierarchy of the configuration items.

    For more information on the configuration parameters of components, refer to TiDB config.toml.example, TiKV config.toml.example, and PD config.toml.example.

  3. Perform a rolling update of the configuration and restart the corresponding components by running the reload command:

    tiup cluster reload ${cluster-name} [-N <nodes>] [-R <roles>]
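
A dotted key such as log.slow-threshold is just a flat way to address a nested item in the component's config hierarchy. The following sketch illustrates that expansion; it is illustrative only, not TiUP's actual implementation:

```python
def expand_dotted(flat: dict) -> dict:
    """Expand dotted keys such as "log.slow-threshold" into nested dicts."""
    nested: dict = {}
    for dotted_key, value in flat.items():
        parts = dotted_key.split(".")
        node = nested
        for part in parts[:-1]:
            # Descend into (or create) the intermediate table.
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

# "log.slow-threshold: 300" addresses slow-threshold under the [log] table.
print(expand_dotted({"log.slow-threshold": 300}))
# → {'log': {'slow-threshold': 300}}
```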

Example

If you want to set the transaction size limit parameter (txn-total-size-limit in the performance module) to 1G in tidb-server, edit the configuration as follows:

  server_configs:
    tidb:
      performance.txn-total-size-limit: 1073741824

Then, run the tiup cluster reload ${cluster-name} -R tidb command to perform a rolling restart of the TiDB component.
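
The value 1073741824 is 1 GiB expressed in bytes, which is the byte value the configuration above sets:

```python
# 1 GiB = 1024 MiB = 1024 * 1024 KiB = 1024 ** 3 bytes
txn_total_size_limit = 1 * 1024 ** 3
print(txn_total_size_limit)  # → 1073741824
```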

Replace with a hotfix package

For a normal upgrade, see Upgrade TiDB Using TiUP. In some scenarios, however, such as debugging, you might need to replace the currently running component with a temporary package. To achieve this, use the patch command:

    tiup cluster patch --help

    Replace the remote package with a specified package and restart the service

    Usage:
      cluster patch <cluster-name> <package-path> [flags]

    Flags:
      -h, --help                   help for patch
      -N, --node strings           Specify the nodes
          --overwrite              Use this package in the future scale-out operations
      -R, --role strings           Specify the role
          --transfer-timeout int   Timeout in seconds when transferring PD and TiKV store leaders (default 600)

    Global Flags:
          --native-ssh             Use the system's native SSH client
          --wait-timeout int       Timeout of waiting the operation
          --ssh-timeout int        Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
      -y, --yes                    Skip all confirmations and assumes 'yes'

If a TiDB hotfix package is in /tmp/tidb-hotfix.tar.gz and you want to replace all the TiDB packages in the cluster, run the following command:

    tiup cluster patch test-cluster /tmp/tidb-hotfix.tar.gz -R tidb

You can also replace only one TiDB package in the cluster:

    tiup cluster patch test-cluster /tmp/tidb-hotfix.tar.gz -N 172.16.4.5:4000

Rename the cluster

After deploying and starting the cluster, you can rename the cluster using the tiup cluster rename command:

    tiup cluster rename ${cluster-name} ${new-name}


Note

  • The operation of renaming a cluster restarts the monitoring system (Prometheus and Grafana).
  • After a cluster is renamed, some panels with the old cluster name might remain on Grafana. You need to delete them manually.

Stop the cluster

The components in the TiDB cluster are stopped in the following order (the monitoring components are also stopped):

Alertmanager > Grafana > Prometheus > TiCDC > Drainer > TiFlash > TiDB > Pump > TiKV > PD
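
Note that this stop order is exactly the reverse of the start order listed earlier; a quick sanity check:

```python
start_order = ["PD", "TiKV", "Pump", "TiDB", "TiFlash",
               "Drainer", "TiCDC", "Prometheus", "Grafana", "Alertmanager"]
stop_order = ["Alertmanager", "Grafana", "Prometheus", "TiCDC", "Drainer",
              "TiFlash", "TiDB", "Pump", "TiKV", "PD"]

# Stopping proceeds in the opposite direction of starting.
print(stop_order == list(reversed(start_order)))  # → True
```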

To stop the cluster, run the following command:

    tiup cluster stop ${cluster-name}

Similar to the start command, the stop command supports stopping some of the components by adding the -R or -N parameters. For example:

  • This command stops only the TiDB component:

    tiup cluster stop ${cluster-name} -R tidb

  • This command stops only the TiDB components on the 1.2.3.4 and 1.2.3.5 hosts:

    tiup cluster stop ${cluster-name} -N 1.2.3.4:4000,1.2.3.5:4000

Clean up cluster data

The clean operation stops all the services and cleans up the data directory, the log directory, or both. The operation cannot be reverted, so proceed with caution.

  • Clean up the data of all services in the cluster, but keep the logs:

    tiup cluster clean ${cluster-name} --data

  • Clean up the logs of all services in the cluster, but keep the data:

    tiup cluster clean ${cluster-name} --log

  • Clean up the data and logs of all services in the cluster:

    tiup cluster clean ${cluster-name} --all

  • Clean up the logs and data of all services except Prometheus:

    tiup cluster clean ${cluster-name} --all --ignore-role prometheus

  • Clean up the logs and data of all services except the 172.16.13.11:9000 instance:

    tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.11:9000

  • Clean up the logs and data of all services except the 172.16.13.12 node:

    tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.12

Destroy the cluster

The destroy operation stops the services and clears the data directory and deployment directory. The operation cannot be reverted, so proceed with caution.

    tiup cluster destroy ${cluster-name}