How to Test TiDB Using Sysbench

It is recommended to use Sysbench 1.0 or later, which can be downloaded from the Sysbench GitHub repository (https://github.com/akopytov/sysbench).

Test plan

TiDB configuration

A higher log level means fewer logs are printed, which has a positive effect on TiDB performance. Specifically, add the following settings to the TiUP configuration file:

  server_configs:
    tidb:
      log.level: "error"

It is also recommended to make sure that tidb_enable_prepared_plan_cache is enabled and that you allow Sysbench to use prepared statements by specifying --db-ps-mode=auto. See SQL Prepared Execution Plan Cache for details about what the SQL plan cache does and how to monitor it.
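To confirm that the plan cache is in effect, you can check and, if necessary, enable the system variable from a MySQL client. This is a minimal sketch; the variable name below applies to TiDB v6.1.0 and later, while earlier versions control the plan cache through the prepared-plan-cache.enabled configuration item instead.

  -- Check whether the prepared plan cache is enabled (ON by default in recent versions).
  SHOW GLOBAL VARIABLES LIKE 'tidb_enable_prepared_plan_cache';
  -- Enable it if it is OFF.
  SET GLOBAL tidb_enable_prepared_plan_cache = ON;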


Note

In different versions of Sysbench, the default value of db-ps-mode might be different. It is recommended to explicitly specify it in the command.

TiKV configuration

A higher log level also means better performance for TiKV.

There are multiple Column Families on a TiKV cluster, which are mainly used to store different types of data: Default CF, Write CF, and Lock CF. For the Sysbench test, you only need to focus on Default CF and Write CF. The proportion of imported data that goes to each Column Family is constant among TiDB clusters:

Default CF : Write CF = 4 : 1

Configure the RocksDB block cache on TiKV based on the machine's memory size so that the memory is fully used. To deploy a TiKV cluster on a 40 GB virtual machine, it is recommended to configure the block cache as follows (the 24 GB : 6 GB split follows the 4 : 1 proportion above):

  server_configs:
    tikv:
      log-level: "error"
      rocksdb.defaultcf.block-cache-size: "24GB"
      rocksdb.writecf.block-cache-size: "6GB"

You can also configure TiKV to use a shared block cache:

  server_configs:
    tikv:
      storage.block-cache.capacity: "30GB"

For more detailed information on TiKV performance tuning, see Tune TiKV Performance.

Test process


Note

The test in this document was performed without load balancing tools such as HAProxy. We ran the Sysbench test on each individual TiDB node and added the results up. Load balancing tools and their parameters in different versions might also affect performance.

Sysbench configuration

This is an example of the Sysbench configuration file:

  mysql-host={TIDB_HOST}
  mysql-port=4000
  mysql-user=root
  mysql-password=password
  mysql-db=sbtest
  time=600
  threads={8, 16, 32, 64, 128, 256}
  report-interval=10
  db-driver=mysql

The above parameters can be adjusted according to actual needs. TIDB_HOST is the IP address of a single TiDB server (the configuration file cannot include multiple addresses), and threads is the number of concurrent connections in the test, which can be adjusted within "8, 16, 32, 64, 128, 256". When importing data, it is recommended to set threads to 8 or 16. After adjusting threads, save the file with the name config. To test several TiDB nodes without a proxy, see the sketch after the sample config file below.

See the following as a sample config file:

  mysql-host=172.16.30.33
  mysql-port=4000
  mysql-user=root
  mysql-password=password
  mysql-db=sbtest
  time=600
  threads=16
  report-interval=10
  db-driver=mysql
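Because each config file can point to only one TiDB address, one way to test multiple TiDB nodes without a proxy is to prepare one config file per node, run the same workload against all of them at the same time, and then add the results up. The following is a minimal sketch under that assumption; config_node1 and config_node2 are hypothetical copies of the config above that differ only in mysql-host.

  # Run the same workload against each TiDB node in parallel, one config file per node.
  for cfg in config_node1 config_node2; do
      sysbench --config-file="$cfg" oltp_point_select \
          --tables=32 --table-size=10000000 \
          --db-ps-mode=auto --rand-type=uniform run > "result_$cfg.log" &
  done
  wait
  # Add up the QPS/TPS figures reported in result_config_node1.log and result_config_node2.log.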

Data import


Note

If you enable the optimistic transaction model (TiDB uses the pessimistic transaction mode by default), TiDB rolls back transactions when a concurrency conflict is found. Setting tidb_disable_txn_auto_retry to off turns on the automatic retry mechanism for transaction conflicts, which prevents Sysbench from quitting because of a transaction conflict error.

Before importing the data, you need to adjust some TiDB settings. Execute the following command in the MySQL client:

  set global tidb_disable_txn_auto_retry = off;

Then exit the client.

Restart the MySQL client and execute the following SQL statement to create the database sbtest:

  create database sbtest;

Adjust the order in which the Sysbench scripts create indexes. Sysbench imports data in the order of "Build Table -> Insert Data -> Create Index", which makes data import slower for TiDB. You can adjust the order to speed up the import of data. Suppose that you use Sysbench version 1.0.20. You can adjust the order in either of the following two ways (a conceptual sketch of the reordering follows the note below):

  • Download the modified oltp_common.lua file for TiDB and overwrite the /usr/share/sysbench/oltp_common.lua file with it.
  • In /usr/share/sysbench/oltp_common.lua, move lines 235-240 to immediately after line 198.


Note

This operation is optional and is only to save the time consumed by data import.
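Conceptually, this reordering changes the per-table import sequence from creating the secondary index after the bulk insert to creating it right after the table, roughly as sketched below. sbtest1 and k_1 are the table and index names that Sysbench generates for the first table; the column lists are omitted.

  -- Default Sysbench order:
  CREATE TABLE sbtest1 (...);
  INSERT INTO sbtest1 ...;          -- bulk data insert
  CREATE INDEX k_1 ON sbtest1 (k);  -- index built after the data is loaded

  -- Adjusted order for TiDB:
  CREATE TABLE sbtest1 (...);
  CREATE INDEX k_1 ON sbtest1 (k);  -- index created before the data is loaded
  INSERT INTO sbtest1 ...;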

At the command line, enter the following command to start importing data. The config file is the one configured in the previous step:

  sysbench --config-file=config oltp_point_select --tables=32 --table-size=10000000 prepare

Warming data and collecting statistics

To warm data, you load data from the disk into the block cache in memory. Warmed data significantly improves the overall performance of the system. It is recommended to warm data once after restarting the cluster.

  sysbench --config-file=config oltp_point_select --tables=32 --table-size=10000000 prewarm

Point select test command

  sysbench --config-file=config oltp_point_select --tables=32 --table-size=10000000 --db-ps-mode=auto --rand-type=uniform run
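For reference, the oltp_point_select workload issues queries of roughly the following form (shown here for the first table; the script picks random ids across all 32 tables):

  SELECT c FROM sbtest1 WHERE id = ?;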

Update index test command

  sysbench --config-file=config oltp_update_index --tables=32 --table-size=10000000 --db-ps-mode=auto --rand-type=uniform run
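The oltp_update_index workload updates the indexed column k, so every statement also has to maintain the secondary index. Its statements are roughly of this form:

  UPDATE sbtest1 SET k = k + 1 WHERE id = ?;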

Read-only test command

  sysbench --config-file=config oltp_read_only --tables=32 --table-size=10000000 --db-ps-mode=auto --rand-type=uniform run
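The oltp_read_only workload mixes point selects with several range queries in each transaction, roughly of the following forms:

  SELECT c FROM sbtest1 WHERE id = ?;
  SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ?;
  SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN ? AND ?;
  SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ? ORDER BY c;
  SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN ? AND ? ORDER BY c;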

Common issues

TiDB and TiKV are both properly configured under high concurrency. Why is the overall performance still low?

This issue is often related to the use of a proxy. You can apply pressure to each single TiDB server separately, add the results up, and compare the sum with the result obtained through the proxy.

Take HAProxy as an example. The nbproc parameter increases the maximum number of processes it can start, and later versions of HAProxy also support nbthread and cpu-map. All of these can mitigate the negative impact of the proxy on performance.
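As a rough, non-authoritative illustration, a multi-threaded global section on HAProxy 1.8 or later might look like the following; the thread count and CPU range are placeholders for your own hardware, not an officially tuned configuration.

  global
      nbthread 16               # number of worker threads
      cpu-map auto:1/1-16 0-15  # pin threads 1-16 to CPU cores 0-15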

Under high concurrency, why is the CPU utilization rate of TiKV still low?

Although the overall CPU utilization rate is low for TiKV, the CPU utilization rate of some modules in the cluster might be high.

The maximum concurrency limits of TiKV modules such as the storage read pool, coprocessor, and gRPC can be adjusted through the TiKV configuration file.

The actual CPU usage can be observed on Grafana's TiKV Thread CPU monitoring panel. If a module is a bottleneck, you can adjust it by increasing the module's concurrency.
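For example, if the unified read pool or gRPC threads turn out to be the bottleneck, the corresponding TiUP settings might look like the following minimal sketch; the values are illustrative and should be sized to your own hardware.

  server_configs:
    tikv:
      server.grpc-concurrency: 8
      readpool.unified.max-thread-count: 16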

Given that TiKV has not yet reached the CPU usage bottleneck under high concurrency, why is TiDB’s CPU utilization rate still low?

Some high-end equipment uses CPUs with a NUMA architecture, where cross-CPU access to remote memory greatly reduces performance. By default, TiDB uses all the CPUs of the server, and goroutine scheduling inevitably leads to cross-CPU memory access.

Therefore, it is recommended to deploy n TiDB instances (where n is the number of NUMA CPUs) on a server with a NUMA architecture, and set the TiDB parameter max-procs to the number of cores in each NUMA CPU.
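A minimal sketch of what such a topology might look like in a TiUP topology file, assuming two NUMA nodes with 16 cores each; the host address, ports, and deployment directories are placeholders, and binding with numa_node requires numactl to be installed on the host.

  tidb_servers:
    - host: 172.16.30.33
      port: 4000
      status_port: 10080
      deploy_dir: /tidb-deploy/tidb-4000
      numa_node: "0"              # bind this instance to NUMA node 0
      config:
        performance.max-procs: 16
    - host: 172.16.30.33
      port: 4001
      status_port: 10081
      deploy_dir: /tidb-deploy/tidb-4001
      numa_node: "1"              # bind this instance to NUMA node 1
      config:
        performance.max-procs: 16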