# Minimal Deployment Topology

This document describes the minimal deployment topology of a TiDB cluster.

## Topology information

| Instance | Count | Physical machine configuration | IP | Configuration |
| :--- | :--- | :--- | :--- | :--- |
| TiDB | 2 | 16 VCore 32 GiB<br/>100 GiB for storage | 10.0.1.1<br/>10.0.1.2 | Default port<br/>Global directory configuration |
| PD | 3 | 4 VCore 8 GiB<br/>100 GiB for storage | 10.0.1.4<br/>10.0.1.5<br/>10.0.1.6 | Default port<br/>Global directory configuration |
| TiKV | 3 | 16 VCore 32 GiB<br/>2 TiB (NVMe SSD) for storage | 10.0.1.7<br/>10.0.1.8<br/>10.0.1.9 | Default port<br/>Global directory configuration |
| Monitoring & Grafana | 1 | 4 VCore 8 GiB<br/>500 GiB (SSD) for storage | 10.0.1.10 | Default port<br/>Global directory configuration |

## Topology templates

### Simple minimal configuration template

```yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 10.0.1.4
  - host: 10.0.1.5
  - host: 10.0.1.6

tidb_servers:
  - host: 10.0.1.1
  - host: 10.0.1.2

tikv_servers:
  - host: 10.0.1.7
  - host: 10.0.1.8
  - host: 10.0.1.9

monitoring_servers:
  - host: 10.0.1.10

grafana_servers:
  - host: 10.0.1.10

alertmanager_servers:
  - host: 10.0.1.10
```
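After saving a template locally (for example as `topology.yaml`), you can hand it to TiUP to verify the target hosts and deploy the cluster. The following is a minimal sketch of that workflow; the file name, the cluster name `tidb-test`, and the version `v8.5.0` are illustrative placeholders, not values from this document:

```shell
# Audit the target machines against the topology file
# (-p prompts for the password of the --user account on the targets).
tiup cluster check ./topology.yaml --user root -p

# Deploy a cluster from the topology file, then start it.
tiup cluster deploy tidb-test v8.5.0 ./topology.yaml --user root -p
tiup cluster start tidb-test
```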

### Detailed minimal configuration template

```yaml
# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: "/tidb-deploy/monitored-9100"
  data_dir: "/tidb-data/monitored-9100"
  log_dir: "/tidb-deploy/monitored-9100/log"

# Server configs are used to specify the runtime configuration of TiDB components.
# All configuration items can be found in TiDB docs:
# - TiDB: https://docs.pingcap.com/zh/tidb/stable/tidb-configuration-file
# - TiKV: https://docs.pingcap.com/zh/tidb/stable/tikv-configuration-file
# - PD: https://docs.pingcap.com/zh/tidb/stable/pd-configuration-file
# All configuration items use dots to represent the hierarchy, e.g.:
#   readpool.storage.use-unified-pool
#
# You can overwrite this configuration via the instance-level `config` field.

server_configs:
  tidb:
    log.slow-threshold: 300
    binlog.enable: false
    binlog.ignore-error: false
  tikv:
    # server.grpc-concurrency: 4
    # raftstore.apply-pool-size: 2
    # raftstore.store-pool-size: 2
    # rocksdb.max-sub-compactions: 1
    # storage.block-cache.capacity: "16GB"
    # readpool.unified.max-thread-count: 12
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64

pd_servers:
  - host: 10.0.1.4
    ssh_port: 22
    name: "pd-1"
    client_port: 2379
    peer_port: 2380
    deploy_dir: "/tidb-deploy/pd-2379"
    data_dir: "/tidb-data/pd-2379"
    log_dir: "/tidb-deploy/pd-2379/log"
    numa_node: "0,1"
    # The following configs are used to overwrite the `server_configs.pd` values.
    config:
      schedule.max-merge-region-size: 20
      schedule.max-merge-region-keys: 200000
  - host: 10.0.1.5
  - host: 10.0.1.6

tidb_servers:
  - host: 10.0.1.1
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: "/tidb-deploy/tidb-4000"
    log_dir: "/tidb-deploy/tidb-4000/log"
    numa_node: "0,1"
    # The following configs are used to overwrite the `server_configs.tidb` values.
    config:
      log.slow-query-file: tidb-slow-overwrited.log
  - host: 10.0.1.2

tikv_servers:
  - host: 10.0.1.7
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: "/tidb-deploy/tikv-20160"
    data_dir: "/tidb-data/tikv-20160"
    log_dir: "/tidb-deploy/tikv-20160/log"
    numa_node: "0,1"
    # The following configs are used to overwrite the `server_configs.tikv` values.
    config:
      server.grpc-concurrency: 4
      server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
  - host: 10.0.1.8
  - host: 10.0.1.9

monitoring_servers:
  - host: 10.0.1.10
    ssh_port: 22
    port: 9090
    deploy_dir: "/tidb-deploy/prometheus-8249"
    data_dir: "/tidb-data/prometheus-8249"
    log_dir: "/tidb-deploy/prometheus-8249/log"

grafana_servers:
  - host: 10.0.1.10
    port: 3000
    deploy_dir: /tidb-deploy/grafana-3000

alertmanager_servers:
  - host: 10.0.1.10
    ssh_port: 22
    web_port: 9093
    cluster_port: 9094
    deploy_dir: "/tidb-deploy/alertmanager-9093"
    data_dir: "/tidb-data/alertmanager-9093"
    log_dir: "/tidb-deploy/alertmanager-9093/log"
```

For detailed descriptions of the configuration items in the above TiDB cluster topology file, see Topology Configuration File for Deploying TiDB Using TiUP.
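Most of the `server_configs` values shown above can also be adjusted after the cluster is deployed. As a hedged sketch of the usual TiUP workflow (the cluster name `tidb-test` is a hypothetical example), you edit the stored topology and then reload it so the changes take effect:

```shell
# Open the cluster's stored topology in an editor to adjust server_configs.
tiup cluster edit-config tidb-test

# Roll out the changed configuration to the affected components.
tiup cluster reload tidb-test
```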


> **Note:**
>
> - You do not need to manually create the `tidb` user specified in the configuration file; the TiUP cluster component automatically creates it on the target machines. You can customize the user, or keep it consistent with the user on the control machine.
> - If you set the deployment directory to a relative path, the cluster is deployed under that user's home directory.
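A quick way to confirm where the deployment directories actually landed (for example, when a relative path was resolved under the user's home directory) is to list the cluster after deployment; `tidb-test` is again a hypothetical cluster name:

```shell
# Print each instance's status along with its resolved deploy and data directories.
tiup cluster display tidb-test
```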