Latency Breakdown

This document breaks down query latency into metrics and then analyzes it from the user’s perspective, one layer at a time, in the sections that follow.

These analyses provide deep insight into where time is spent during TiDB SQL queries and serve as a guide to diagnosing TiDB’s critical path. In addition, the Diagnosis use cases section introduces how to analyze latency in real use cases.

It is recommended that you read Performance Analysis and Tuning before this document. Note that when latency is broken down into metrics, the average duration or latency is calculated, rather than that of specific slow queries. Many metrics are exposed as histograms, which represent the distribution of a duration or latency. To calculate the average latency, use the corresponding sum and count counters:

    avg = ${metric_name}_sum / ${metric_name}_count

Metrics described in this document can be read directly from the Prometheus dashboard of TiDB.
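
For example, the following PromQL sketch computes the average latency of a histogram metric over a 5-minute window. tidb_session_parse_duration_seconds is used only as an illustration here; the rate window and label selectors depend on your deployment.

    # Average latency of a histogram metric: rate of _sum divided by rate of _count.
    sum(rate(tidb_session_parse_duration_seconds_sum[5m]))
    /
    sum(rate(tidb_session_parse_duration_seconds_count[5m]))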

General SQL layer

The general SQL layer latency exists at the top level of TiDB and is shared by all SQL queries. The following is the time cost diagram of general SQL layer operations:

Latency Breakdown - Figure 1

    Diagram(
      NonTerminal("Token wait duration"),
      Choice(
        0,
        Comment("Prepared statement"),
        NonTerminal("Parse duration"),
      ),
      OneOrMore(
        Sequence(
          Choice(
            0,
            NonTerminal("Optimize prepared plan duration"),
            Sequence(
              Comment("Plan cache miss"),
              NonTerminal("Compile duration"),
            ),
          ),
          NonTerminal("TSO wait duration"),
          NonTerminal("Execution duration"),
        ),
        Comment("Retry"),
      ),
    )

The general SQL layer latency can be observed as the e2e duration metric and is calculated as:

    e2e duration =
      tidb_server_get_token_duration_seconds +
      tidb_session_parse_duration_seconds +
      tidb_session_compile_duration_seconds +
      tidb_session_execute_duration_seconds{type="general"}

  • tidb_server_get_token_duration_seconds records the duration of Token waiting. This is usually less than 1 millisecond and is small enough to be ignored.
  • tidb_session_parse_duration_seconds records the duration of parsing SQL queries to an Abstract Syntax Tree (AST), which can be skipped by PREPARE/EXECUTE statements.
  • tidb_session_compile_duration_seconds records the duration of compiling an AST to an execution plan, which can be skipped by SQL prepared execution plan cache.
  • tidb_session_execute_duration_seconds{type="general"} records the duration of execution, which mixes all types of user queries. This needs to be broken down into fine-grained durations for analyzing performance issues or bottlenecks.

Generally, OLTP (Online Transactional Processing) workload can be divided into read and write queries, which share some critical code. The following sections describe latency in read queries and write queries, which are executed differently.

Read queries

Read queries have only a single processing form.

Point get

The following is the time cost diagram of point get operations:

Latency Breakdown - Figure 2

    Diagram(
      Choice(
        0,
        NonTerminal("Resolve TSO"),
        Comment("Read by clustered PK in auto-commit-txn mode or snapshot read"),
      ),
      Choice(
        0,
        NonTerminal("Read handle by index key"),
        Comment("Read by clustered PK, encode handle by key"),
      ),
      NonTerminal("Read value by handle"),
    )

During point get, the tidb_session_execute_duration_seconds{type="general"} duration is calculated as:

    tidb_session_execute_duration_seconds{type="general"} =
      pd_client_cmd_handle_cmds_duration_seconds{type="wait"} +
      read handle duration +
      read value duration

pd_client_cmd_handle_cmds_duration_seconds{type="wait"} records the duration of fetching TSO (Timestamp Oracle) from PD. When reading in auto-commit transaction mode with a clustered primary index, or when reading from a snapshot, the value is zero.
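
To check whether TSO waiting contributes significantly to point get latency, you can query its average with a sketch like the following; the 5-minute rate window is an assumption, adjust it as needed:

    # Average TSO wait duration reported by the PD client.
    sum(rate(pd_client_cmd_handle_cmds_duration_seconds_sum{type="wait"}[5m]))
    /
    sum(rate(pd_client_cmd_handle_cmds_duration_seconds_count{type="wait"}[5m]))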

The read handle duration and read value duration are calculated as:

    read handle duration = read value duration =
      tidb_tikvclient_txn_cmd_duration_seconds{type="get"} =
      send request duration =
      tidb_tikvclient_request_seconds{type="Get"} =
        tidb_tikvclient_batch_wait_duration +
        tidb_tikvclient_batch_send_latency +
        tikv_grpc_msg_duration_seconds{type="kv_get"} +
        tidb_tikvclient_rpc_net_latency_seconds{store="?"}

The tidb_tikvclient_request_seconds{type="Get"} records the duration of get requests, which are sent directly to TiKV through a batched gRPC wrapper. For more details about the preceding batch client durations, such as tidb_tikvclient_batch_wait_duration, tidb_tikvclient_batch_send_latency, and tidb_tikvclient_rpc_net_latency_seconds{store="?"}, refer to the Batch client section.

The tikv_grpc_msg_duration_seconds{type="kv_get"} duration is calculated as:

    tikv_grpc_msg_duration_seconds{type="kv_get"} =
      tikv_storage_engine_async_request_duration_seconds{type="snapshot"} +
      tikv_engine_seek_micro_seconds{type="seek_average"} +
      read value duration +
      read value duration(non-short value)

At this point, requests are inside TiKV. TiKV processes a get request with one seek and one or two read actions (short values are encoded in the write column family, so reading it once is enough). TiKV gets a snapshot before processing the read request. For more details about the TiKV snapshot duration, refer to the TiKV snapshot section.

The read value duration(from disk) is calculated as:

    read value duration(from disk) =
      sum(rate(tikv_storage_rocksdb_perf{metric="block_read_time", req="get/batch_get_command"})) /
      sum(rate(tikv_storage_rocksdb_perf{metric="block_read_count", req="get/batch_get_command"}))

TiKV uses RocksDB as its storage engine. When the required value is missing from the block cache, TiKV needs to load the value from the disk. For tikv_storage_rocksdb_perf, the get request can be either get or batch_get_command.
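
Because the request type can be either get or batch_get_command, a regular expression label matcher can combine both in a single query. The following is a sketch assuming a 5-minute rate window:

    # Average block read time from disk, combining get and batch_get_command requests.
    sum(rate(tikv_storage_rocksdb_perf{metric="block_read_time", req=~"get|batch_get_command"}[5m]))
    /
    sum(rate(tikv_storage_rocksdb_perf{metric="block_read_count", req=~"get|batch_get_command"}[5m]))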

Batch point get

The following is the time cost diagram of batch point get operations:

Latency Breakdown - Figure 3

    Diagram(
      NonTerminal("Resolve TSO"),
      Choice(
        0,
        NonTerminal("Read all handles by index keys"),
        Comment("Read by clustered PK, encode handle by keys"),
      ),
      NonTerminal("Read values by handles"),
    )

During batch point get, the tidb_session_execute_duration_seconds{type="general"} is calculated as:

    tidb_session_execute_duration_seconds{type="general"} =
      pd_client_cmd_handle_cmds_duration_seconds{type="wait"} +
      read handles duration +
      read values duration

The process of batch point get is almost the same as Point get except that batch point get reads multiple values at the same time.

The read handles duration and read values duration are calculated as:

    read handles duration = read values duration =
      tidb_tikvclient_txn_cmd_duration_seconds{type="batch_get"} =
      send request duration =
      tidb_tikvclient_request_seconds{type="BatchGet"} =
        tidb_tikvclient_batch_wait_duration(transaction) +
        tidb_tikvclient_batch_send_latency(transaction) +
        tikv_grpc_msg_duration_seconds{type="kv_batch_get"} +
        tidb_tikvclient_rpc_net_latency_seconds{store="?"}(transaction)

For more details about the preceding batch client duration, such as tidb_tikvclient_batch_wait_duration(transaction), tidb_tikvclient_batch_send_latency(transaction), and tidb_tikvclient_rpc_net_latency_seconds{store="?"}(transaction), refer to the Batch client section.

The tikv_grpc_msg_duration_seconds{type="kv_batch_get"} duration is calculated as:

    tikv_grpc_msg_duration_seconds{type="kv_batch_get"} =
      tikv_storage_engine_async_request_duration_seconds{type="snapshot"} +
      n * (
        tikv_engine_seek_micro_seconds{type="seek_max"} +
        read value duration +
        read value duration(non-short value)
      )

    read value duration(from disk) =
      sum(rate(tikv_storage_rocksdb_perf{metric="block_read_time", req="batch_get"})) /
      sum(rate(tikv_storage_rocksdb_perf{metric="block_read_count", req="batch_get"}))

After getting a snapshot, TiKV reads multiple values from the same snapshot. The read duration is the same as Point get. When TiKV loads data from disk, the average duration can be calculated by tikv_storage_rocksdb_perf with req="batch_get".

Table scan & Index scan

The following is the time cost diagram of table scan and index scan operations:

Latency Breakdown - Figure 4

    Diagram(
      Stack(
        NonTerminal("Resolve TSO"),
        NonTerminal("Load region cache for related table/index ranges"),
        OneOrMore(
          NonTerminal("Wait for result"),
          Comment("Next loop: drain the result"),
        ),
      ),
    )

During table scan and index scan, the tidb_session_execute_duration_seconds{type="general"} duration is calculated as:

    tidb_session_execute_duration_seconds{type="general"} =
      pd_client_cmd_handle_cmds_duration_seconds{type="wait"} +
      req_per_copr * (
        tidb_distsql_handle_query_duration_seconds{sql_type="general"}
      )

    tidb_distsql_handle_query_duration_seconds{sql_type="general"} <= send request duration

Table scan and index scan are processed in the same way. req_per_copr is the distributed task count. Because coprocessor execution and returning data to the client happen in different threads, tidb_distsql_handle_query_duration_seconds{sql_type="general"} is the wait time and is less than the send request duration.

The send request duration and req_per_copr are calculated as:

    send request duration =
      tidb_tikvclient_batch_wait_duration +
      tidb_tikvclient_batch_send_latency +
      tikv_grpc_msg_duration_seconds{type="coprocessor"} +
      tidb_tikvclient_rpc_net_latency_seconds{store="?"}

    tikv_grpc_msg_duration_seconds{type="coprocessor"} =
      tikv_coprocessor_request_wait_seconds{type="snapshot"} +
      tikv_coprocessor_request_wait_seconds{type="schedule"} +
      tikv_coprocessor_request_handler_build_seconds{type="index/select"} +
      tikv_coprocessor_request_handle_seconds{type="index/select"}

    req_per_copr = rate(tidb_distsql_handle_query_duration_seconds_count) / rate(tidb_distsql_scan_keys_partial_num_count)

In TiKV, the table scan type is select and the index scan type is index. The details of select and index type duration are the same.
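
To estimate req_per_copr on a live cluster, the preceding definition can be written as a PromQL sketch; the 5-minute rate window and the sum() wrapping are assumptions:

    # req_per_copr as defined above, computed over a 5-minute window.
    sum(rate(tidb_distsql_handle_query_duration_seconds_count[5m]))
    /
    sum(rate(tidb_distsql_scan_keys_partial_num_count[5m]))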

Index look up

The following is the time cost diagram of index look up operations:

Latency Breakdown - Figure 5

    Diagram(
      Stack(
        NonTerminal("Resolve TSO"),
        NonTerminal("Load region cache for related index ranges"),
        OneOrMore(
          Sequence(
            NonTerminal("Wait for index scan result"),
            NonTerminal("Wait for table scan result"),
          ),
          Comment("Next loop: drain the result"),
        ),
      ),
    )

During index look up, the tidb_session_execute_duration_seconds{type="general"} duration is calculated as:

    tidb_session_execute_duration_seconds{type="general"} =
      pd_client_cmd_handle_cmds_duration_seconds{type="wait"} +
      req_per_copr * (
        tidb_distsql_handle_query_duration_seconds{sql_type="general"}
      ) +
      req_per_copr * (
        tidb_distsql_handle_query_duration_seconds{sql_type="general"}
      )

    req_per_copr = rate(tidb_distsql_handle_query_duration_seconds_count) / rate(tidb_distsql_scan_keys_partial_num_count)

An index look up combines index scan and table scan, which are processed in a pipeline.

Write queries

Write queries are much more complex than read queries, and they have several variants. The following is the time cost diagram of write query operations:

Latency Breakdown - Figure 6

    Diagram(
      NonTerminal("Execute write query"),
      Choice(
        0,
        NonTerminal("Pessimistic lock keys"),
        Comment("bypass in optimistic transaction"),
      ),
      Choice(
        0,
        NonTerminal("Auto Commit Transaction"),
        Comment("bypass in non-auto-commit or explicit transaction"),
      ),
    )

|                 | Pessimistic transaction | Optimistic transaction |
|-----------------|-------------------------|------------------------|
| Auto-commit     | execute + lock + commit | execute + commit       |
| Non-auto-commit | execute + lock          | execute                |

A write query is divided into the following three phases:

  • execute phase: execute and write mutation into the memory of TiDB.
  • lock phase: acquire pessimistic locks for the execution result.
  • commit phase: commit the transaction via the two-phase commit protocol (2PC).

In the execute phase, TiDB manipulates data in memory and the main latency comes from reading the required data. For update and delete queries, TiDB reads data from TiKV first, and then updates or deletes the row in memory.

The exception is lock-time read operations (SELECT FOR UPDATE) with point get and batch point get, which perform read and lock in a single Remote Procedure Call (RPC).

Lock-time point get

The following is the time cost diagram of lock-time point get operations:

Latency Breakdown - Figure 7

    Diagram(
      Choice(
        0,
        Sequence(
          NonTerminal("Read handle key by index key"),
          NonTerminal("Lock index key"),
        ),
        Comment("Clustered index"),
      ),
      NonTerminal("Lock handle key"),
      NonTerminal("Read value from pessimistic lock cache"),
    )

During lock-time point get, the execution(clustered PK) and execution(non-clustered PK or UK) duration are calculated as:

    execution(clustered PK) =
      tidb_tikvclient_txn_cmd_duration_seconds{type="lock_keys"}

    execution(non-clustered PK or UK) =
      2 * tidb_tikvclient_txn_cmd_duration_seconds{type="lock_keys"}

A lock-time point get locks the key and returns its value. Compared with performing the lock phase after execution, this saves one round trip. The duration of a lock-time point get can be treated the same as the Lock duration.

Lock-time batch point get

The following is the time cost diagram of lock-time batch point get operations:

Latency Breakdown - Figure 8

    Diagram(
      Choice(
        0,
        NonTerminal("Read handle keys by index keys"),
        Comment("Clustered index"),
      ),
      NonTerminal("Lock index and handle keys"),
      NonTerminal("Read values from pessimistic lock cache"),
    )

During lock-time batch point get, the execution(clustered PK) and execution(non-clustered PK or UK) duration are calculated as:

    execution(clustered PK) =
      tidb_tikvclient_txn_cmd_duration_seconds{type="lock_keys"}

    execution(non-clustered PK or UK) =
      tidb_tikvclient_txn_cmd_duration_seconds{type="batch_get"} +
      tidb_tikvclient_txn_cmd_duration_seconds{type="lock_keys"}

The execution of the lock-time batch point get is similar to the Lock-time point get except that the lock-time batch point get reads multiple values in a single RPC. For more details about the tidb_tikvclient_txn_cmd_duration_seconds{type="batch_get"} duration, refer to the Batch point get section.

Lock

This section describes the lock duration.

    round = ceil(
      sum(rate(tidb_tikvclient_txn_regions_num_sum{type="2pc_pessimistic_lock"})) /
      sum(rate(tidb_tikvclient_txn_regions_num_count{type="2pc_pessimistic_lock"})) /
      committer-concurrency
    )

    lock = tidb_tikvclient_txn_cmd_duration_seconds{type="lock_keys"} =
      round * tidb_tikvclient_request_seconds{type="PessimisticLock"}

Locks are acquired through the 2PC structure, which has a flow control mechanism. The flow control limits the number of concurrent in-flight requests by committer-concurrency (the default value is 128). For simplicity, the flow control can be treated as an amplification of request latency (round).
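
To see how many Regions a pessimistic lock command touches on average (the ratio that drives round in the preceding formula), you can use a sketch like the following; the 5-minute rate window is an assumption:

    # Average number of Regions involved per pessimistic lock command.
    sum(rate(tidb_tikvclient_txn_regions_num_sum{type="2pc_pessimistic_lock"}[5m]))
    /
    sum(rate(tidb_tikvclient_txn_regions_num_count{type="2pc_pessimistic_lock"}[5m]))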

The tidb_tikvclient_request_seconds{type="PessimisticLock"} is calculated as:

    tidb_tikvclient_request_seconds{type="PessimisticLock"} =
      tidb_tikvclient_batch_wait_duration +
      tidb_tikvclient_batch_send_latency +
      tikv_grpc_msg_duration_seconds{type="kv_pessimistic_lock"} +
      tidb_tikvclient_rpc_net_latency_seconds{store="?"}

For more details about the preceding batch client duration, such as tidb_tikvclient_batch_wait_duration, tidb_tikvclient_batch_send_latency, and tidb_tikvclient_rpc_net_latency_seconds{store="?"}, refer to the Batch client section.

The tikv_grpc_msg_duration_seconds{type="kv_pessimistic_lock"} duration is calculated as:

    tikv_grpc_msg_duration_seconds{type="kv_pessimistic_lock"} =
      tikv_scheduler_latch_wait_duration_seconds{type="acquire_pessimistic_lock"} +
      tikv_storage_engine_async_request_duration_seconds{type="snapshot"} +
      (lock in-mem key count + lock on-disk key count) * lock read duration +
      lock on-disk key count / (lock in-mem key count + lock on-disk key count) *
        lock write duration

  • Since TiDB v6.0, TiKV uses in-memory pessimistic locks by default. An in-memory pessimistic lock bypasses the async write process.

  • tikv_storage_engine_async_request_duration_seconds{type="snapshot"} is a snapshot type duration. For more details, refer to the TiKV Snapshot section.

  • The lock in-mem key count and lock on-disk key count are calculated as:

        lock in-mem key count =
          sum(rate(tikv_in_memory_pessimistic_locking{result="success"})) /
          sum(rate(tikv_grpc_msg_duration_seconds_count{type="kv_pessimistic_lock"}))

        lock on-disk key count =
          sum(rate(tikv_in_memory_pessimistic_locking{result="full"})) /
          sum(rate(tikv_grpc_msg_duration_seconds_count{type="kv_pessimistic_lock"}))

    The count of in-memory and on-disk locked keys can be calculated by the in-memory lock counter. TiKV reads the keys’ values before acquiring locks, and the read duration can be calculated by RocksDB performance context.

        lock read duration(from disk) =
          sum(rate(tikv_storage_rocksdb_perf{metric="block_read_time", req="acquire_pessimistic_lock"})) /
          sum(rate(tikv_storage_rocksdb_perf{metric="block_read_count", req="acquire_pessimistic_lock"}))

  • lock write duration is the duration of writing an on-disk lock. For more details, refer to the Async write section.

Commit

This section describes the commit duration. The following is the time cost diagram of commit operations:

Latency Breakdown - Figure 9

    Diagram(
      Stack(
        Sequence(
          Choice(
            0,
            Comment("use 2pc or causal consistency"),
            NonTerminal("Get min-commit-ts"),
          ),
          Optional("Async prewrite binlog"),
          NonTerminal("Prewrite mutations"),
          Optional("Wait prewrite binlog result"),
        ),
        Sequence(
          Choice(
            1,
            Comment("1pc"),
            Sequence(
              Comment("2pc"),
              NonTerminal("Get commit-ts"),
              NonTerminal("Check schema"),
              NonTerminal("Commit PK mutation"),
            ),
            Sequence(
              Comment("async-commit"),
              NonTerminal("Commit mutations asynchronously"),
            ),
          ),
          Choice(
            0,
            Comment("committed"),
            NonTerminal("Async cleanup"),
          ),
          Optional("Commit binlog"),
        ),
      ),
    )

The duration of the commit phase is calculated as:

    commit =
      Get_latest_ts_time +
      Prewrite_time +
      Get_commit_ts_time +
      Commit_time

    Get_latest_ts_time = Get_commit_ts_time =
      pd_client_cmd_handle_cmds_duration_seconds{type="wait"}

    prewrite_round = ceil(
      sum(rate(tidb_tikvclient_txn_regions_num_sum{type="2pc_prewrite"})) /
      sum(rate(tidb_tikvclient_txn_regions_num_count{type="2pc_prewrite"})) /
      committer-concurrency
    )

    commit_round = ceil(
      sum(rate(tidb_tikvclient_txn_regions_num_sum{type="2pc_commit"})) /
      sum(rate(tidb_tikvclient_txn_regions_num_count{type="2pc_commit"})) /
      committer-concurrency
    )

    Prewrite_time =
      prewrite_round * tidb_tikvclient_request_seconds{type="Prewrite"}

    Commit_time =
      commit_round * tidb_tikvclient_request_seconds{type="Commit"}

The commit duration can be broken down into four metrics:

  • Get_latest_ts_time records the duration of getting the latest TSO in an async-commit or single-phase commit (1PC) transaction.
  • Prewrite_time records the duration of the prewrite phase.
  • Get_commit_ts_time records the duration of getting the commit TSO, which only exists in common 2PC transactions.
  • Commit_time records the duration of the commit phase. Note that an async-commit or 1PC transaction does not have this phase.

Like pessimistic lock, flow control acts as an amplification of latency (prewrite_round and commit_round in the preceding formula).
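
To gauge how strongly flow control amplifies prewrite latency, you can compare the average number of Regions per prewrite command with committer-concurrency; the following is a sketch assuming a 5-minute rate window:

    # Average number of Regions involved per prewrite command.
    sum(rate(tidb_tikvclient_txn_regions_num_sum{type="2pc_prewrite"}[5m]))
    /
    sum(rate(tidb_tikvclient_txn_regions_num_count{type="2pc_prewrite"}[5m]))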

The tidb_tikvclient_request_seconds{type="Prewrite"} and tidb_tikvclient_request_seconds{type="Commit"} duration are calculated as:

    tidb_tikvclient_request_seconds{type="Prewrite"} =
      tidb_tikvclient_batch_wait_duration +
      tidb_tikvclient_batch_send_latency +
      tikv_grpc_msg_duration_seconds{type="kv_prewrite"} +
      tidb_tikvclient_rpc_net_latency_seconds{store="?"}

    tidb_tikvclient_request_seconds{type="Commit"} =
      tidb_tikvclient_batch_wait_duration +
      tidb_tikvclient_batch_send_latency +
      tikv_grpc_msg_duration_seconds{type="kv_commit"} +
      tidb_tikvclient_rpc_net_latency_seconds{store="?"}

For more details about the preceding batch client duration, such as tidb_tikvclient_batch_wait_duration, tidb_tikvclient_batch_send_latency, and tidb_tikvclient_rpc_net_latency_seconds{store="?"}, refer to the Batch client section.

The tikv_grpc_msg_duration_seconds{type="kv_prewrite"} is calculated as:

    tikv_grpc_msg_duration_seconds{type="kv_prewrite"} =
      prewrite key count * prewrite read duration +
      prewrite write duration

    prewrite key count =
      sum(rate(tikv_scheduler_kv_command_key_write_sum{type="prewrite"})) /
      sum(rate(tikv_scheduler_kv_command_key_write_count{type="prewrite"}))

    prewrite read duration(from disk) =
      sum(rate(tikv_storage_rocksdb_perf{metric="block_read_time", req="prewrite"})) /
      sum(rate(tikv_storage_rocksdb_perf{metric="block_read_count", req="prewrite"}))

Like locks in TiKV, prewrite is processed in read and write phases. The read duration can be calculated from the RocksDB performance context. For more details about the write duration, refer to the Async write section.

The tikv_grpc_msg_duration_seconds{type="kv_commit"} is calculated as:

    tikv_grpc_msg_duration_seconds{type="kv_commit"} =
      commit key count * commit read duration +
      commit write duration

    commit key count =
      sum(rate(tikv_scheduler_kv_command_key_write_sum{type="commit"})) /
      sum(rate(tikv_scheduler_kv_command_key_write_count{type="commit"}))

    commit read duration(from disk) =
      sum(rate(tikv_storage_rocksdb_perf{metric="block_read_time", req="commit"})) /
      sum(rate(tikv_storage_rocksdb_perf{metric="block_read_count", req="commit"})) (storage)

The duration of kv_commit is almost the same as kv_prewrite. For more details about the write duration, refer to the Async write section.

Batch client

The following is the time cost diagram of the batch client:

Latency Breakdown - Figure 10

    Diagram(
      NonTerminal("Get conn pool to the target store"),
      Choice(
        0,
        Sequence(
          Comment("Batch enabled"),
          NonTerminal("Push request to channel"),
          NonTerminal("Wait response"),
        ),
        Sequence(
          NonTerminal("Get conn from pool"),
          NonTerminal("Call RPC"),
          Choice(
            0,
            Comment("Unary call"),
            NonTerminal("Recv first"),
          ),
        ),
      ),
    )

  • The overall duration of sending a request is observed as tidb_tikvclient_request_seconds.
  • The RPC client maintains connection pools (named ConnArray) to each store, and each pool has a BatchConn with a batch request (send) channel.
  • Batching is enabled when the store is a TiKV node and the batch size is positive, which is true in most cases.
  • The size of the batch request channel is tikv-client.max-batch-size (default is 128); the enqueue duration is observed as tidb_tikvclient_batch_wait_duration.
  • There are three kinds of stream requests: CmdBatchCop, CmdCopStream, and CmdMPPConn, which involve an additional recv() call to fetch the first response from the stream.

Though some latency is still not observed, tidb_tikvclient_request_seconds can be approximately calculated as:

    tidb_tikvclient_request_seconds{type="?"} =
      tidb_tikvclient_batch_wait_duration +
      tidb_tikvclient_batch_send_latency +
      tikv_grpc_msg_duration_seconds{type="kv_?"} +
      tidb_tikvclient_rpc_net_latency_seconds{store="?"}

  • tidb_tikvclient_batch_wait_duration records the waiting duration in the batch system.
  • tidb_tikvclient_batch_send_latency records the encode duration in the batch system.
  • tikv_grpc_msg_duration_seconds{type="kv_?"} is the TiKV processing duration.
  • tidb_tikvclient_rpc_net_latency_seconds records the network latency.
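
If the batch system is a suspected bottleneck, a first check is the average batch wait time. The following sketch assumes tidb_tikvclient_batch_wait_duration is exposed as a histogram and uses a 5-minute rate window:

    # Average time a request spends waiting in the batch request channel.
    sum(rate(tidb_tikvclient_batch_wait_duration_sum[5m]))
    /
    sum(rate(tidb_tikvclient_batch_wait_duration_count[5m]))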

TiKV snapshot

The following is the time cost diagram of TiKV snapshot operations:

Latency Breakdown - Figure 11

    Diagram(
      Choice(
        0,
        Comment("Local Read"),
        Sequence(
          NonTerminal("Propose Wait"),
          NonTerminal("Read index Read Wait"),
        ),
      ),
      NonTerminal("Fetch A Snapshot From KV Engine"),
    )

The overall duration of a TiKV snapshot is observed as tikv_storage_engine_async_request_duration_seconds{type="snapshot"} and is calculated as:

    tikv_storage_engine_async_request_duration_seconds{type="snapshot"} =
      tikv_coprocessor_request_wait_seconds{type="snapshot"} =
      tikv_raftstore_request_wait_time_duration_secs +
      tikv_raftstore_commit_log_duration_seconds +
      get snapshot from rocksdb duration

When the leader lease has expired, TiKV proposes a read index command before getting a snapshot from RocksDB. tikv_raftstore_request_wait_time_duration_secs and tikv_raftstore_commit_log_duration_seconds are the durations of committing the read index command.

Since getting a snapshot from RocksDB is usually a fast operation, the get snapshot from rocksdb duration is ignored.
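
To confirm whether snapshot acquisition contributes noticeably to read latency, you can query its average duration; the following is a sketch assuming a 5-minute rate window:

    # Average TiKV snapshot duration.
    sum(rate(tikv_storage_engine_async_request_duration_seconds_sum{type="snapshot"}[5m]))
    /
    sum(rate(tikv_storage_engine_async_request_duration_seconds_count{type="snapshot"}[5m]))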

Async write

Async write is the process by which TiKV writes data into the Raft-based replicated state machine asynchronously with a callback.

  • The following is the time cost diagram of async write operations when asynchronous IO is disabled:

    Latency Breakdown - Figure 12

        Diagram(
          NonTerminal("Propose Wait"),
          NonTerminal("Process Command"),
          Choice(
            0,
            Sequence(
              NonTerminal("Wait Current Batch"),
              NonTerminal("Write to Log Engine"),
            ),
            Sequence(
              NonTerminal("RaftMsg Send Wait"),
              NonTerminal("Commit Log Wait"),
            ),
          ),
          NonTerminal("Apply Wait"),
          NonTerminal("Apply Log"),
        )
  • The following is the time cost diagram of async write operations when asynchronous IO is enabled:

    Latency Breakdown - Figure 13

        Diagram(
          NonTerminal("Propose Wait"),
          NonTerminal("Process Command"),
          Choice(
            0,
            NonTerminal("Wait Until Persisted by Write Worker"),
            Sequence(
              NonTerminal("RaftMsg Send Wait"),
              NonTerminal("Commit Log Wait"),
            ),
          ),
          NonTerminal("Apply Wait"),
          NonTerminal("Apply Log"),
        )

The async write duration is calculated as:

    async write duration(async io disabled) =
      propose +
      async io disabled commit +
      tikv_raftstore_apply_wait_time_duration_secs +
      tikv_raftstore_apply_log_duration_seconds

    async write duration(async io enabled) =
      propose +
      async io enabled commit +
      tikv_raftstore_apply_wait_time_duration_secs +
      tikv_raftstore_apply_log_duration_seconds

Async write can be broken down into the following three phases:

  • Propose
  • Commit
  • Apply: tikv_raftstore_apply_wait_time_duration_secs + tikv_raftstore_apply_log_duration_seconds in the preceding formula

The duration of the propose phase is calculated as:

    propose =
      propose wait duration +
      propose duration

    propose wait duration =
      tikv_raftstore_store_wf_batch_wait_duration_seconds

    propose duration =
      tikv_raftstore_store_wf_send_to_queue_duration_seconds -
      tikv_raftstore_store_wf_batch_wait_duration_seconds

The Raft process is recorded in a waterfall manner, so the propose duration is calculated as the difference between the two metrics.
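
As an illustration of reading the waterfall metrics, the following sketch approximates the average propose duration as the difference between the two cumulative waterfall histograms. It assumes both metrics are recorded once per proposal and uses a 5-minute rate window:

    # Approximate average propose duration: send-to-queue point minus batch-wait point.
    (
      sum(rate(tikv_raftstore_store_wf_send_to_queue_duration_seconds_sum[5m]))
      -
      sum(rate(tikv_raftstore_store_wf_batch_wait_duration_seconds_sum[5m]))
    )
    /
    sum(rate(tikv_raftstore_store_wf_send_to_queue_duration_seconds_count[5m]))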

The duration of the commit phase is calculated as:

    async io disabled commit = max(
      persist log locally duration,
      replicate log duration
    )

    async io enabled commit = max(
      wait by write worker duration,
      replicate log duration
    )

Since v5.3.0, TiKV supports Async IO Raft (writing Raft logs by a StoreWriter thread pool). Async IO Raft is enabled only when store-io-pool-size is set to a positive value, which changes the commit process. The persist log locally duration and wait by write worker duration are calculated as:

    persist log locally duration =
      batch wait duration +
      write to raft db duration

    batch wait duration =
      tikv_raftstore_store_wf_before_write_duration_seconds -
      tikv_raftstore_store_wf_send_to_queue_duration_seconds

    write to raft db duration =
      tikv_raftstore_store_wf_write_end_duration_seconds -
      tikv_raftstore_store_wf_before_write_duration_seconds

    wait by write worker duration =
      tikv_raftstore_store_wf_persist_duration_seconds -
      tikv_raftstore_store_wf_send_to_queue_duration_seconds

The difference between enabling and disabling Async IO lies in the duration of persisting logs locally. With Async IO enabled, this duration can be calculated directly from the waterfall metrics (skipping the batch wait duration).

The replicate log duration records the time for a log to be persisted on a quorum of peers, which includes an RPC duration and the duration of log persistence on the majority of peers. The replicate log duration is calculated as:

    replicate log duration =
      raftmsg send wait duration +
      commit log wait duration

    raftmsg send wait duration =
      tikv_raftstore_store_wf_send_proposal_duration_seconds -
      tikv_raftstore_store_wf_send_to_queue_duration_seconds

    commit log wait duration =
      tikv_raftstore_store_wf_commit_log_duration -
      tikv_raftstore_store_wf_send_proposal_duration_seconds

Raft DB

The following is the time cost diagram of Raft DB operations:

Latency Breakdown - Figure 14

    Diagram(
      NonTerminal("Wait for Writer Leader"),
      NonTerminal("Write and Sync Log"),
      NonTerminal("Apply Log to Memtable"),
    )

    write to raft db duration = raft db write duration
    commit log wait duration >= raft db write duration

    raft db write duration(raft engine enabled) =
      raft_engine_write_preprocess_duration_seconds +
      raft_engine_write_leader_duration_seconds +
      raft_engine_write_apply_duration_seconds

    raft db write duration(raft engine disabled) =
      tikv_raftstore_store_perf_context_time_duration_secs{type="write_thread_wait"} +
      tikv_raftstore_store_perf_context_time_duration_secs{type="write_scheduling_flushes_compactions_time"} +
      tikv_raftstore_store_perf_context_time_duration_secs{type="write_wal_time"} +
      tikv_raftstore_store_perf_context_time_duration_secs{type="write_memtable_time"}

Because commit log wait duration is the longest duration of quorum peers, it might be larger than raft db write duration.

Since v6.1.0, TiKV uses Raft Engine as its default log storage engine, which changes the process of writing log.

KV DB

The following is the time cost diagram of KV DB operations:

Latency Breakdown - Figure 15

    Diagram(
      NonTerminal("Wait for Writer Leader"),
      NonTerminal("Preprocess"),
      Choice(
        0,
        Comment("No Need to Switch"),
        NonTerminal("Switch WAL or Memtable"),
      ),
      NonTerminal("Write and Sync WAL"),
      NonTerminal("Apply to Memtable"),
    )

    tikv_raftstore_apply_log_duration_seconds =
      tikv_raftstore_apply_perf_context_time_duration_secs{type="write_thread_wait"} +
      tikv_raftstore_apply_perf_context_time_duration_secs{type="write_scheduling_flushes_compactions_time"} +
      tikv_raftstore_apply_perf_context_time_duration_secs{type="write_wal_time"} +
      tikv_raftstore_apply_perf_context_time_duration_secs{type="write_memtable_time"}

In the async write process, committed logs need to be applied to the KV DB. The applying duration can be calculated from the RocksDB performance context.
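
To watch the apply part of the async write path, you can query the average apply log duration; the following sketch assumes tikv_raftstore_apply_log_duration_seconds is exposed as a histogram and uses a 5-minute rate window:

    # Average duration of applying committed logs to the KV DB.
    sum(rate(tikv_raftstore_apply_log_duration_seconds_sum[5m]))
    /
    sum(rate(tikv_raftstore_apply_log_duration_seconds_count[5m]))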

Diagnosis use cases

The preceding sections explain the details of time cost metrics during queries. This section introduces common procedures of metrics analysis when you encounter slow read or write queries. All metrics can be checked in the Database Time panel of the Performance Overview dashboard.

Slow read queries

If SELECT statements account for a significant portion of the database time, you can assume that TiDB is slow at read queries.

The execution plans of slow queries can be found in the Top SQL statements panel of TiDB Dashboard. To investigate the time costs of slow read queries, you can analyze Point get, Batch point get and some simple coprocessor queries according to the preceding descriptions.

Slow write queries

Before investigating slow writes, you need to troubleshoot the cause of conflicts by checking tikv_scheduler_latch_wait_duration_seconds_sum{type="acquire_pessimistic_lock"} by (instance), as shown in the query sketch after the following list:

  • If this metric is high in some specific TiKV instances, there might be conflicts in hot Regions.
  • If this metric is high across all instances, there might be conflicts in the application.
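
As mentioned above, this check can be written as the following PromQL sketch; the 5-minute rate window is an assumption:

    # Per-instance latch wait time for acquiring pessimistic locks.
    sum(rate(tikv_scheduler_latch_wait_duration_seconds_sum{type="acquire_pessimistic_lock"}[5m])) by (instance)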

After the conflicts caused by the application are confirmed and addressed, you can investigate slow write queries by analyzing the duration of the Lock and Commit phases.