Deploy Monitoring Services for the TiDB Cluster

This document is intended for users who want to manually deploy TiDB monitoring and alert services.

If you deploy the TiDB cluster using TiUP, the monitoring and alert services are automatically deployed, and no manual deployment is needed.

Deploy Prometheus and Grafana

Assume that the TiDB cluster topology is as follows:

| Name  | Host IP         | Services                                      |
| ----- | --------------- | --------------------------------------------- |
| Node1 | 192.168.199.113 | PD1, TiDB, node_exporter, Prometheus, Grafana |
| Node2 | 192.168.199.114 | PD2, node_exporter                            |
| Node3 | 192.168.199.115 | PD3, node_exporter                            |
| Node4 | 192.168.199.116 | TiKV1, node_exporter                          |
| Node5 | 192.168.199.117 | TiKV2, node_exporter                          |
| Node6 | 192.168.199.118 | TiKV3, node_exporter                          |

Step 1: Download the binary package

```shell
# Downloads the packages.
wget https://download.pingcap.org/prometheus-2.27.1.linux-amd64.tar.gz
wget https://download.pingcap.org/node_exporter-v1.3.1-linux-amd64.tar.gz
wget https://download.pingcap.org/grafana-7.5.11.linux-amd64.tar.gz

# Extracts the packages.
tar -xzf prometheus-2.27.1.linux-amd64.tar.gz
tar -xzf node_exporter-v1.3.1-linux-amd64.tar.gz
tar -xzf grafana-7.5.11.linux-amd64.tar.gz
```

Step 2: Start node_exporter on each node (Node1 through Node6)

```shell
cd node_exporter-v1.3.1-linux-amd64

# Starts the node_exporter service.
./node_exporter --web.listen-address=":9100" \
    --log.level="info" &
```
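Starting node_exporter with a trailing `&` ties it to your shell session, and it does not come back after a reboot. On systemd-based systems, you might instead wrap it in a unit file; the sketch below assumes the binary lives under `/opt` and should be adjusted to your environment:

```ini
# /etc/systemd/system/node_exporter.service (hypothetical path -- adjust)
[Unit]
Description=Prometheus node_exporter
After=network.target

[Service]
ExecStart=/opt/node_exporter-v1.3.1-linux-amd64/node_exporter \
    --web.listen-address=":9100" --log.level="info"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl daemon-reload && systemctl enable --now node_exporter` starts the service and keeps it running across reboots.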

Step 3: Start Prometheus on Node1

Edit the Prometheus configuration file:

```shell
cd prometheus-2.27.1.linux-amd64 &&
vi prometheus.yml
```

```yaml
...
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default value (10s).
  external_labels:
    cluster: 'test-cluster'
    monitor: "prometheus"

scrape_configs:
  - job_name: 'overwritten-nodes'
    honor_labels: true # Do not overwrite job & instance labels.
    static_configs:
      - targets:
          - '192.168.199.113:9100'
          - '192.168.199.114:9100'
          - '192.168.199.115:9100'
          - '192.168.199.116:9100'
          - '192.168.199.117:9100'
          - '192.168.199.118:9100'

  - job_name: 'tidb'
    honor_labels: true # Do not overwrite job & instance labels.
    static_configs:
      - targets:
          - '192.168.199.113:10080'

  - job_name: 'pd'
    honor_labels: true # Do not overwrite job & instance labels.
    static_configs:
      - targets:
          - '192.168.199.113:2379'
          - '192.168.199.114:2379'
          - '192.168.199.115:2379'

  - job_name: 'tikv'
    honor_labels: true # Do not overwrite job & instance labels.
    static_configs:
      - targets:
          - '192.168.199.116:20180'
          - '192.168.199.117:20180'
          - '192.168.199.118:20180'
...
```
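The node_exporter targets differ only in the last octet of the IP address. If you script your deployment, a small shell loop (a convenience sketch, not part of the official procedure) can generate the target block; adjust the indentation to match your `prometheus.yml`:

```shell
# Emits the node_exporter target list for the 'overwritten-nodes' job.
for last in 113 114 115 116 117 118; do
  printf "          - '192.168.199.%s:9100'\n" "$last"
done
```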

Start the Prometheus service:

```shell
./prometheus \
    --config.file="./prometheus.yml" \
    --web.listen-address=":9090" \
    --web.external-url="http://192.168.199.113:9090/" \
    --web.enable-admin-api \
    --log.level="info" \
    --storage.tsdb.path="./data.metrics" \
    --storage.tsdb.retention="15d" &
```

Step 4: Start Grafana on Node1

Edit the Grafana configuration file:

```shell
cd grafana-7.5.11 &&
vi conf/grafana.ini
```

```ini
...
[paths]
data = ./data
logs = ./data/log
plugins = ./data/plugins

[server]
http_port = 3000
domain = 192.168.199.113

[database]
[session]

[analytics]
check_for_updates = true

[security]
admin_user = admin
admin_password = admin

[snapshots]
[users]
[auth.anonymous]
[auth.basic]
[auth.ldap]
[smtp]
[emails]

[log]
mode = file

[log.console]

[log.file]
level = info
format = text

[log.syslog]
[event_publisher]

[dashboards.json]
enabled = false
path = ./data/dashboards

[metrics]

[grafana_net]
url = https://grafana.net
...
```

Start the Grafana service:

```shell
./bin/grafana-server \
    --config="./conf/grafana.ini" &
```

Configure Grafana

This section describes how to configure Grafana.

Step 1: Add a Prometheus data source

  1. Log in to the Grafana Web interface (default address: http://192.168.199.113:3000; default username and password: admin/admin, as set in grafana.ini).


    Note

    For the Change Password step, you can choose Skip.

  2. In the Grafana sidebar menu, click Data Source under Configuration.

  3. Click Add data source.

  4. Specify the data source information.

    • Specify a Name for the data source.
    • For Type, select Prometheus.
    • For URL, specify the Prometheus address.
    • Specify other fields as needed.
  5. Click Add to save the new data source.
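If you prefer not to click through the UI, Grafana can also provision the data source from a YAML file that it reads at startup. A minimal sketch (the file name and data source name below are arbitrary):

```yaml
# conf/provisioning/datasources/tidb-cluster.yaml (example file name)
apiVersion: 1
datasources:
  - name: tidb-cluster       # arbitrary display name
    type: prometheus
    access: proxy
    url: http://192.168.199.113:9090
    isDefault: true
```

Restart Grafana after adding the file so that the data source is picked up.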

Step 2: Import a Grafana dashboard

To import a Grafana dashboard for the PD server, the TiKV server, and the TiDB server, take the following steps for each dashboard:

  1. Click the Grafana logo to open the sidebar menu.

  2. In the sidebar menu, click Dashboards -> Import to open the Import Dashboard window.

  3. Click Upload .json File to upload a JSON file. You can download the TiDB Grafana configuration files from the pingcap/tidb, tikv/tikv, and tikv/pd repositories.


    Note

    For the TiKV, PD, and TiDB dashboards, the corresponding JSON files are tikv_summary.json, tikv_details.json, tikv_trouble_shooting.json, pd.json, tidb.json, and tidb_summary.json.

  4. Click Load.

  5. Select a Prometheus data source.

  6. Click Import. A Prometheus dashboard is imported.
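Dashboards can also be provisioned from disk instead of imported by hand: point Grafana at a directory containing the downloaded JSON files. A minimal sketch (file name, provider name, and directory are assumptions; use whatever directory holds your JSON files):

```yaml
# conf/provisioning/dashboards/tidb-cluster.yaml (example file name)
apiVersion: 1
providers:
  - name: tidb-cluster        # arbitrary provider name
    type: file
    options:
      # Directory where you placed tidb.json, pd.json, tikv_summary.json, etc.
      path: ./data/dashboards
```

Grafana loads every dashboard JSON file found under `path` at startup.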

View component metrics

Click New dashboard in the top menu and choose the dashboard you want to view.


You can get the following metrics for cluster components:

  • TiDB server:

    • Query processing time, to monitor the latency and throughput
    • DDL process monitoring
    • TiKV client-related monitoring
    • PD client-related monitoring
  • PD server:

    • The total number of times that commands are executed
    • The total number of times that a certain command fails
    • The duration of successful command executions
    • The duration of failed command executions
    • The duration until a command finishes and returns a result
  • TiKV server:

    • Garbage Collection (GC) monitoring
    • The total number of times that TiKV commands are executed
    • The duration that the Scheduler takes to execute commands
    • The total number of Raft propose commands
    • The duration that Raft takes to execute commands
    • The total number of times that Raft commands fail
    • The total number of times that Raft processes the ready state
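Besides browsing the dashboards, you can query the same metrics directly in the Prometheus UI (http://192.168.199.113:9090). For example, a PromQL query along the following lines plots 99th-percentile TiDB query latency; the metric name is taken from the TiDB dashboard and may differ across TiDB versions:

```
histogram_quantile(0.99,
  sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) by (le))
```

This is useful for ad-hoc checks, such as confirming that Prometheus is actually scraping the TiDB instance before you rely on the Grafana panels.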