METRICS_SUMMARY

The TiDB cluster has many monitoring metrics. To make it easy to detect abnormal monitoring metrics, TiDB 4.0 introduces the following two monitoring summary tables:

  • information_schema.metrics_summary
  • information_schema.metrics_summary_by_label

Note

The preceding two monitoring summary tables are only applicable to TiDB Self-Hosted and not available on TiDB Cloud.

The two tables summarize all monitoring data for you to check each monitoring metric efficiently. Compared with information_schema.metrics_summary, the information_schema.metrics_summary_by_label table has an additional label column and performs differentiated statistics according to different labels.

  USE information_schema;
  DESC metrics_summary;

  +--------------+--------------+------+------+---------+-------+
  | Field        | Type         | Null | Key  | Default | Extra |
  +--------------+--------------+------+------+---------+-------+
  | METRICS_NAME | varchar(64)  | YES  |      | NULL    |       |
  | QUANTILE     | double       | YES  |      | NULL    |       |
  | SUM_VALUE    | double(22,6) | YES  |      | NULL    |       |
  | AVG_VALUE    | double(22,6) | YES  |      | NULL    |       |
  | MIN_VALUE    | double(22,6) | YES  |      | NULL    |       |
  | MAX_VALUE    | double(22,6) | YES  |      | NULL    |       |
  | COMMENT      | varchar(256) | YES  |      | NULL    |       |
  +--------------+--------------+------+------+---------+-------+
  7 rows in set (0.00 sec)

Field description:

  • METRICS_NAME: The monitoring table name.
  • QUANTILE: The percentile. You can specify QUANTILE in SQL statements, as shown in the runnable example after this list. For example:
    • select * from metrics_summary where quantile=0.99 specifies viewing the data of the 0.99 percentile.
    • select * from metrics_summary where quantile in (0.80, 0.90, 0.99, 0.999) specifies viewing the data of the 0.80, 0.90, 0.99, and 0.999 percentiles at the same time.
  • SUM_VALUE, AVG_VALUE, MIN_VALUE, and MAX_VALUE: The sum, average, minimum, and maximum values respectively.
  • COMMENT: The comment for the corresponding monitoring table.
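
For instance, the two QUANTILE filters above can be run as follows (the rows returned depend on the metrics your cluster is currently reporting):

  USE information_schema;

  -- View only the 0.99 percentile of every metric.
  SELECT * FROM metrics_summary WHERE quantile = 0.99;

  -- View several percentiles of the same metrics in one result set.
  SELECT * FROM metrics_summary WHERE quantile IN (0.80, 0.90, 0.99, 0.999);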

For example:

To query the three groups of monitoring items with the highest average time consumption in the TiDB cluster within the time range of '2020-03-08 13:23:00' to '2020-03-08 13:33:00', you can directly query the information_schema.metrics_summary table and use the /*+ time_range() */ hint to specify the time range. The SQL statement is as follows:

  SELECT /*+ time_range('2020-03-08 13:23:00','2020-03-08 13:33:00') */ *
  FROM information_schema.metrics_summary
  WHERE metrics_name LIKE 'tidb%duration'
    AND avg_value > 0
    AND quantile = 0.99
  ORDER BY avg_value DESC
  LIMIT 3\G

  ***************************[ 1. row ]***************************
  METRICS_NAME | tidb_get_token_duration
  QUANTILE     | 0.99
  SUM_VALUE    | 8.972509
  AVG_VALUE    | 0.996945
  MIN_VALUE    | 0.996515
  MAX_VALUE    | 0.997458
  COMMENT      | The quantile of Duration (us) for getting token, it should be small until concurrency limit is reached(second)
  ***************************[ 2. row ]***************************
  METRICS_NAME | tidb_query_duration
  QUANTILE     | 0.99
  SUM_VALUE    | 0.269079
  AVG_VALUE    | 0.007272
  MIN_VALUE    | 0.000667
  MAX_VALUE    | 0.01554
  COMMENT      | The quantile of TiDB query durations(second)
  ***************************[ 3. row ]***************************
  METRICS_NAME | tidb_kv_request_duration
  QUANTILE     | 0.99
  SUM_VALUE    | 0.170232
  AVG_VALUE    | 0.004601
  MIN_VALUE    | 0.000975
  MAX_VALUE    | 0.013
  COMMENT      | The quantile of kv requests durations by store
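
Before querying the per-label table, you can inspect its schema in the same way. As the sample output below shows, it carries INSTANCE and LABEL columns in addition to the fields described above (the exact schema may vary between TiDB versions):

  USE information_schema;

  -- Compared with metrics_summary, this table adds INSTANCE and LABEL columns.
  DESC metrics_summary_by_label;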

Similarly, the following example queries the metrics_summary_by_label monitoring summary table:

  SELECT /*+ time_range('2020-03-08 13:23:00','2020-03-08 13:33:00') */ *
  FROM information_schema.metrics_summary_by_label
  WHERE metrics_name LIKE 'tidb%duration'
    AND avg_value > 0
    AND quantile = 0.99
  ORDER BY avg_value DESC
  LIMIT 10\G

  ***************************[ 1. row ]***************************
  INSTANCE     | 172.16.5.40:10089
  METRICS_NAME | tidb_get_token_duration
  LABEL        |
  QUANTILE     | 0.99
  SUM_VALUE    | 8.972509
  AVG_VALUE    | 0.996945
  MIN_VALUE    | 0.996515
  MAX_VALUE    | 0.997458
  COMMENT      | The quantile of Duration (us) for getting token, it should be small until concurrency limit is reached(second)
  ***************************[ 2. row ]***************************
  INSTANCE     | 172.16.5.40:10089
  METRICS_NAME | tidb_query_duration
  LABEL        | Select
  QUANTILE     | 0.99
  SUM_VALUE    | 0.072083
  AVG_VALUE    | 0.008009
  MIN_VALUE    | 0.007905
  MAX_VALUE    | 0.008241
  COMMENT      | The quantile of TiDB query durations(second)
  ***************************[ 3. row ]***************************
  INSTANCE     | 172.16.5.40:10089
  METRICS_NAME | tidb_query_duration
  LABEL        | Rollback
  QUANTILE     | 0.99
  SUM_VALUE    | 0.072083
  AVG_VALUE    | 0.008009
  MIN_VALUE    | 0.007905
  MAX_VALUE    | 0.008241
  COMMENT      | The quantile of TiDB query durations(second)

The second and third rows of the query results above indicate that the Select and Rollback statements on tidb_query_duration have a long average execution time.
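
To drill further into a single metric, you can filter metrics_summary_by_label by metric name. The following is a minimal sketch reusing the same time range as above; the label values you see depend on your workload:

  SELECT /*+ time_range('2020-03-08 13:23:00','2020-03-08 13:33:00') */
         instance, label, avg_value, max_value
  FROM information_schema.metrics_summary_by_label
  WHERE metrics_name = 'tidb_query_duration'
    AND quantile = 0.99
  ORDER BY avg_value DESC\G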

In addition to the example above, you can use the monitoring summary tables to compare the full-link monitoring items of two time periods, quickly find the module whose monitoring data changes the most, and locate the bottleneck. The following example compares all monitoring items in two periods (where t1 is the baseline) and sorts these items by the greatest difference:

  • Period t1: ("2020-03-03 17:08:00", "2020-03-03 17:11:00")
  • Period t2: ("2020-03-03 17:18:00", "2020-03-03 17:21:00")

The monitoring items of the two time periods are joined on METRICS_NAME and sorted by the difference value. time_range is the hint that specifies the query time range.

  SELECT GREATEST(t1.avg_value, t2.avg_value) / LEAST(t1.avg_value, t2.avg_value) AS ratio,
         t1.metrics_name,
         t1.avg_value AS t1_avg_value,
         t2.avg_value AS t2_avg_value,
         t2.comment
  FROM
    (SELECT /*+ time_range("2020-03-03 17:08:00", "2020-03-03 17:11:00") */ *
     FROM information_schema.metrics_summary) t1
  JOIN
    (SELECT /*+ time_range("2020-03-03 17:18:00", "2020-03-03 17:21:00") */ *
     FROM information_schema.metrics_summary) t2
    ON t1.metrics_name = t2.metrics_name
  ORDER BY ratio DESC
  LIMIT 10;

  +---------------+------------------------------------------+---------------+---------------+---------------------------------------------------------------------------------------------+
  | ratio         | metrics_name                             | t1_avg_value  | t2_avg_value  | comment                                                                                     |
  +---------------+------------------------------------------+---------------+---------------+---------------------------------------------------------------------------------------------+
  | 5865.59537065 | tidb_slow_query_cop_process_total_time   | 0.016333      | 95.804724     | The total time of TiDB slow query statistics with slow query total cop process time(second) |
  | 3648.74109023 | tidb_distsql_partial_scan_key_total_num  | 10865.666667  | 39646004.4394 | The total num of distsql partial scan key numbers                                           |
  | 267.002351165 | tidb_slow_query_cop_wait_total_time      | 0.003333      | 0.890008      | The total time of TiDB slow query statistics with slow query total cop wait time(second)    |
  | 192.43267836  | tikv_cop_total_response_total_size       | 2515333.66667 | 484032394.445 |                                                                                             |
  | 192.43267836  | tikv_cop_total_response_size_per_seconds | 41922.227778  | 8067206.57408 |                                                                                             |
  | 152.780296296 | tidb_distsql_scan_key_total_num          | 5304.333333   | 810397.618317 | The total num of distsql scan numbers                                                       |
  | 126.042290167 | tidb_distsql_execution_total_time        | 0.421622      | 53.142143     | The total time of distsql execution(second)                                                 |
  | 105.164020657 | tikv_cop_scan_details                    | 134.450733    | 14139.379665  |                                                                                             |
  | 105.164020657 | tikv_cop_scan_details_total              | 8067.043981   | 848362.77991  |                                                                                             |
  | 101.635495394 | tikv_cop_scan_keys_num                   | 1070.875      | 108838.91113  |                                                                                             |
  +---------------+------------------------------------------+---------------+---------------+---------------------------------------------------------------------------------------------+

From the query result above, you can get the following information:

  • tidb_slow_query_cop_process_total_time (the time consumption of cop process in TiDB slow queries) in period t2 is 5,865 times higher than that in period t1.
  • tidb_distsql_partial_scan_key_total_num (the number of keys to scan requested by TiDB’s distsql) in period t2 is 3,648 times higher than that in period t1.
  • tidb_slow_query_cop_wait_total_time (the queueing time of Coprocessor requests in TiDB slow queries) in period t2 is 267 times higher than that in period t1.
  • tikv_cop_total_response_total_size (the size of the TiKV Coprocessor request result) in period t2 is 192 times higher than that in period t1.
  • tikv_cop_scan_details (the scan details of TiKV Coprocessor requests) in period t2 is 105 times higher than that in period t1.

From the result above, you can see that there are far more Coprocessor requests in period t2 than in period t1, which overloads the TiKV Coprocessor and forces cop tasks to wait. It might be that some large queries appear in period t2 and bring more load.
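
To check which instance or request type contributes most of the extra Coprocessor load, one option is a per-label drill-down into period t2. The following sketch assumes the tikv_cop* metrics shown above are the ones you want to inspect:

  SELECT /*+ time_range("2020-03-03 17:18:00", "2020-03-03 17:21:00") */
         instance, metrics_name, label, sum_value, avg_value
  FROM information_schema.metrics_summary_by_label
  WHERE metrics_name LIKE 'tikv_cop%'
  ORDER BY sum_value DESC
  LIMIT 10;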

In fact, the go-ycsb pressure test runs during the entire period from t1 to t2, and 20 TPC-H queries also run during period t2. So it is the TPC-H queries that cause the large number of Coprocessor requests.