TiCDC Deployment Topology


Note

TiCDC has been generally available (GA) since v4.0.6 and can be used in production environments.

This document describes the deployment topology of TiCDC and how to deploy TiCDC together with the minimal cluster topology. TiCDC, introduced in TiDB v4.0, is a tool for replicating the incremental data of TiDB. It supports multiple downstream platforms (TiDB, MySQL, Kafka, MQ, storage services, and so on). Compared with TiDB Binlog, TiCDC has lower latency and native high availability.
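After TiCDC is deployed, replication to a downstream is configured by creating a changefeed. The following is only a minimal sketch: the sink URI, MySQL address and credentials, and the changefeed ID are hypothetical placeholders, and the exact CLI flags (for example, `--server` versus the older `--pd`) vary between TiCDC versions, so check the TiCDC documentation for your release.

```
# Minimal sketch: create a changefeed that replicates incremental data to a downstream MySQL instance.
# 10.0.1.11:8300 is one of the TiCDC nodes from the topology information below;
# the sink URI, credentials, and changefeed ID are placeholders.
tiup cdc cli changefeed create \
  --server=http://10.0.1.11:8300 \
  --sink-uri="mysql://root:password@10.0.1.100:3306/" \
  --changefeed-id="simple-replication-task"

# List changefeeds to confirm the task was created.
tiup cdc cli changefeed list --server=http://10.0.1.11:8300
```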

Topology information

| Instance | Count | Physical machine configuration | IP | Configuration |
| :-- | :-- | :-- | :-- | :-- |
| TiDB | 3 | 16 VCore 32GB * 1 | 10.0.1.1, 10.0.1.2, 10.0.1.3 | Default port; global directory configuration |
| PD | 3 | 4 VCore 8GB * 1 | 10.0.1.4, 10.0.1.5, 10.0.1.6 | Default port; global directory configuration |
| TiKV | 3 | 16 VCore 32GB 2TB (nvme ssd) * 1 | 10.0.1.7, 10.0.1.8, 10.0.1.9 | Default port; global directory configuration |
| CDC | 3 | 8 VCore 16GB * 1 | 10.0.1.11, 10.0.1.12, 10.0.1.13 | Default port; global directory configuration |
| Monitoring & Grafana | 1 | 4 VCore 8GB * 1, 500GB (ssd) | 10.0.1.11 | Default port; global directory configuration |

Topology templates

Simple TiCDC configuration template

```
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 10.0.1.4
  - host: 10.0.1.5
  - host: 10.0.1.6

tidb_servers:
  - host: 10.0.1.1
  - host: 10.0.1.2
  - host: 10.0.1.3

tikv_servers:
  - host: 10.0.1.7
  - host: 10.0.1.8
  - host: 10.0.1.9

cdc_servers:
  - host: 10.0.1.7
  - host: 10.0.1.8
  - host: 10.0.1.9

monitoring_servers:
  - host: 10.0.1.10

grafana_servers:
  - host: 10.0.1.10

alertmanager_servers:
  - host: 10.0.1.10
```

Detailed TiCDC configuration template

```
# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: "/tidb-deploy/monitored-9100"
  data_dir: "/tidb-data/monitored-9100"
  log_dir: "/tidb-deploy/monitored-9100/log"

# Server configs are used to specify the runtime configuration of TiDB components.
# All configuration items can be found in TiDB docs:
# - TiDB: https://docs.pingcap.com/zh/tidb/stable/tidb-configuration-file
# - TiKV: https://docs.pingcap.com/zh/tidb/stable/tikv-configuration-file
# - PD: https://docs.pingcap.com/zh/tidb/stable/pd-configuration-file
# All configuration items use points to represent the hierarchy, e.g:
#   readpool.storage.use-unified-pool
#
# You can overwrite this configuration via the instance-level config field.
server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    # server.grpc-concurrency: 4
    # raftstore.apply-pool-size: 2
    # raftstore.store-pool-size: 2
    # rocksdb.max-sub-compactions: 1
    # storage.block-cache.capacity: "16GB"
    # readpool.unified.max-thread-count: 12
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64
  cdc:
    # capture-session-ttl: 10
    # sorter.sort-dir: "/tmp/cdc_sort"
    # gc-ttl: 86400

pd_servers:
  - host: 10.0.1.4
    ssh_port: 22
    name: "pd-1"
    client_port: 2379
    peer_port: 2380
    deploy_dir: "/tidb-deploy/pd-2379"
    data_dir: "/tidb-data/pd-2379"
    log_dir: "/tidb-deploy/pd-2379/log"
    numa_node: "0,1"
    # The following configs are used to overwrite the server_configs.pd values.
    config:
      schedule.max-merge-region-size: 20
      schedule.max-merge-region-keys: 200000
  - host: 10.0.1.5
  - host: 10.0.1.6

tidb_servers:
  - host: 10.0.1.1
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: "/tidb-deploy/tidb-4000"
    log_dir: "/tidb-deploy/tidb-4000/log"
    numa_node: "0,1"
    # The following configs are used to overwrite the server_configs.tidb values.
    config:
      log.slow-query-file: tidb-slow-overwrited.log
  - host: 10.0.1.2
  - host: 10.0.1.3

tikv_servers:
  - host: 10.0.1.7
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: "/tidb-deploy/tikv-20160"
    data_dir: "/tidb-data/tikv-20160"
    log_dir: "/tidb-deploy/tikv-20160/log"
    numa_node: "0,1"
    # The following configs are used to overwrite the server_configs.tikv values.
    config:
      server.grpc-concurrency: 4
      server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
  - host: 10.0.1.8
  - host: 10.0.1.9

cdc_servers:
  - host: 10.0.1.1
    port: 8300
    deploy_dir: "/tidb-deploy/cdc-8300"
    data_dir: "/tidb-data/cdc-8300"
    log_dir: "/tidb-deploy/cdc-8300/log"
    gc-ttl: 86400
    ticdc_cluster_id: "cluster1"
  - host: 10.0.1.2
    port: 8300
    deploy_dir: "/tidb-deploy/cdc-8300"
    data_dir: "/tidb-data/cdc-8300"
    log_dir: "/tidb-deploy/cdc-8300/log"
    gc-ttl: 86400
    ticdc_cluster_id: "cluster1"
  - host: 10.0.1.3
    port: 8300
    deploy_dir: "/tidb-deploy/cdc-8300"
    data_dir: "/tidb-data/cdc-8300"
    log_dir: "/tidb-deploy/cdc-8300/log"
    gc-ttl: 86400
    ticdc_cluster_id: "cluster2"

monitoring_servers:
  - host: 10.0.1.10
    ssh_port: 22
    port: 9090
    deploy_dir: "/tidb-deploy/prometheus-8249"
    data_dir: "/tidb-data/prometheus-8249"
    log_dir: "/tidb-deploy/prometheus-8249/log"

grafana_servers:
  - host: 10.0.1.10
    port: 3000
    deploy_dir: /tidb-deploy/grafana-3000

alertmanager_servers:
  - host: 10.0.1.10
    ssh_port: 22
    web_port: 9093
    cluster_port: 9094
    deploy_dir: "/tidb-deploy/alertmanager-9093"
    data_dir: "/tidb-data/alertmanager-9093"
    log_dir: "/tidb-deploy/alertmanager-9093/log"
```

For detailed descriptions of the configuration items in the above TiDB cluster topology file, see Topology Configuration File for Deploying a TiDB Cluster Using TiUP.


Note

  • You do not need to manually create the tidb user specified in the configuration file; the TiUP cluster component creates it automatically on the target machines. You can customize the user, or keep it the same as the user on the control machine.
  • If you configure the deployment directory as a relative path, the cluster is deployed under that user's home directory.
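Once a topology file based on one of the templates above is saved locally, the cluster can be checked and deployed with the TiUP cluster component. The commands below are a sketch under assumptions: the file name topology.yaml, the cluster name tidb-test, and the version number are placeholders, so substitute your own values and deployment user.

```
# Optional pre-check of the target machines against the topology file.
tiup cluster check ./topology.yaml --user root -p

# Deploy the cluster described by the topology file
# (cluster name and TiDB version are examples).
tiup cluster deploy tidb-test v7.1.0 ./topology.yaml --user root -p

# Start the cluster and confirm that the cdc_servers instances are up.
tiup cluster start tidb-test
tiup cluster display tidb-test
```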