TiDB Binlog Cluster Deployment

This document describes how to deploy TiDB Binlog using a binary package.

Hardware requirements

Pump and Drainer are deployed and run on 64-bit generic hardware server platforms with the Intel x86-64 architecture.

In development, testing, and production environments, the server hardware requirements are as follows:

| Service | Number of servers | CPU | Disk | Memory |
| :------ | :---------------- | :-- | :--- | :----- |
| Pump | 3 | 8 core+ | SSD, 200 GB+ | 16 GB |
| Drainer | 1 | 8 core+ | SAS, 100 GB+ (if binlogs are output as local files, the disk size depends on how long these files are retained) | 16 GB |

Deploy TiDB Binlog using TiUP

It is recommended to deploy TiDB Binlog using TiUP. To do that, when deploying TiDB using TiUP, you need to add the node information of drainer and pump of TiDB Binlog in TiDB Binlog Deployment Topology. For detailed deployment information, refer to Deploy a TiDB Cluster Using TiUP.
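For reference, a minimal TiUP topology fragment declaring the TiDB Binlog components might look like the following sketch. The host IPs are the example addresses used later in this document, and only a few of the supported fields are shown; see TiDB Binlog Deployment Topology for the full set:

```yaml
# Illustrative fragment of a TiUP topology file (for example, topology.yaml).
# Host IPs are placeholders; adjust them to your environment.
pump_servers:
  - host: 192.168.0.11
  - host: 192.168.0.12
drainer_servers:
  - host: 192.168.0.13
    config:
      syncer.db-type: "mysql"
```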

Deploy TiDB Binlog using a binary package

Download the official binary package

The binary package of TiDB Binlog is included in the TiDB Toolkit. To download the TiDB Toolkit, see Download TiDB Tools.

The usage example

Assuming that you have three PD nodes, one TiDB node, two Pump nodes, and one Drainer node, the information of each node is as follows:

| Node | IP |
| :--- | :- |
| TiDB | 192.168.0.10 |
| PD1 | 192.168.0.16 |
| PD2 | 192.168.0.15 |
| PD3 | 192.168.0.14 |
| Pump | 192.168.0.11 |
| Pump | 192.168.0.12 |
| Drainer | 192.168.0.13 |

The following part shows how to use Pump and Drainer based on the nodes above.

  1. Deploy Pump using the binary.

    • To view the command line parameters of Pump, execute ./pump -help:

      Usage of Pump:
      -L string
      the output information level of logs: debug, info, warn, error, fatal ("info" by default)
      -V
      print version information
      -addr string
      the RPC address through which Pump provides the service (-addr="192.168.0.11:8250")
      -advertise-addr string
      the RPC address through which Pump provides the external service (-advertise-addr="192.168.0.11:8250")
      -config string
      the path of the configuration file. If you specify the configuration file, Pump reads it first. If the corresponding configuration also exists in the command line parameters, Pump uses the command line values to override those in the configuration file.
      -data-dir string
      the path where the Pump data is stored
      -gc int
      the number of days to retain the data in Pump ("7" by default)
      -heartbeat-interval int
      the interval of the heartbeats Pump sends to PD (in seconds)
      -log-file string
      the file path of logs
      -log-rotate string
      the switch frequency of logs (hour/day)
      -metrics-addr string
      the Prometheus Pushgateway address. If it is not set, the monitoring metrics are not reported.
      -metrics-interval int
      the report frequency of the monitoring metrics ("15" by default, in seconds)
      -node-id string
      the unique ID of a Pump node. If you do not specify this ID, the system automatically generates an ID based on the host name and listening port.
      -pd-urls string
      the address of the PD cluster nodes (-pd-urls="http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379")
      -fake-binlog-interval int
      the frequency at which a Pump node generates fake binlog ("3" by default, in seconds)
    • Taking deploying Pump on “192.168.0.11” as an example, the Pump configuration file is as follows:

      # Pump Configuration
      # the bound address of Pump
      addr = "192.168.0.11:8250"
      # the address through which Pump provides the external service
      advertise-addr = "192.168.0.11:8250"
      # the number of days to retain the data in Pump ("7" by default)
      gc = 7
      # the directory where the Pump data is stored
      data-dir = "data.pump"
      # the interval of the heartbeats Pump sends to PD (in seconds)
      heartbeat-interval = 2
      # the address of the PD cluster nodes (each separated by a comma with no whitespace)
      pd-urls = "http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379"
      # [security]
      # This section is generally commented out if no special security settings are required.
      # The file path containing a list of trusted SSL CAs connected to the cluster.
      # ssl-ca = "/path/to/ca.pem"
      # The path to the X509 certificate in PEM format that is connected to the cluster.
      # ssl-cert = "/path/to/drainer.pem"
      # The path to the X509 key in PEM format that is connected to the cluster.
      # ssl-key = "/path/to/drainer-key.pem"
      # [storage]
      # Set to true (the default) to guarantee reliability by ensuring that binlog data is flushed to the disk.
      # sync-log = true
      # When the available disk space is less than the set value, Pump stops writing data.
      # 42 MB -> 42000000, 42 mib -> 44040192
      # default: 10 gib
      # stop-write-at-available-space = "10 gib"
      # The LSM DB settings embedded in Pump. Unless you know this part well, it is usually commented out.
      # [storage.kv]
      # block-cache-capacity = 8388608
      # block-restart-interval = 16
      # block-size = 4096
      # compaction-L0-trigger = 8
      # compaction-table-size = 67108864
      # compaction-total-size = 536870912
      # compaction-total-size-multiplier = 8.0
      # write-buffer = 67108864
      # write-L0-pause-trigger = 24
      # write-L0-slowdown-trigger = 17
    • The example of starting Pump:

      ./pump -config pump.toml

      If a parameter is set both on the command line and in the configuration file, the command line value is used.
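      After Pump starts, you can check that it has registered with PD using the binlogctl tool shipped in the same toolkit. This is a sketch; the PD URL is the example address from this deployment:

      ```shell
      # List the Pump nodes registered in PD together with their states.
      ./binlogctl -pd-urls=http://192.168.0.16:2379 -cmd pumps
      ```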

  2. Deploy Drainer using the binary.

    • To view the command line parameters of Drainer, execute ./drainer -help:

      Usage of Drainer:
      -L string
      the output information level of logs: debug, info, warn, error, fatal ("info" by default)
      -V
      print version information
      -addr string
      the address through which Drainer provides the service (-addr="192.168.0.13:8249")
      -c int
      the number of concurrent downstream replication workers. A larger value gives better replication throughput ("1" by default).
      -cache-binlog-count int
      the limit on the number of binlog items in the cache ("8" by default)
      If a large single binlog item in the upstream causes OOM in Drainer, try lowering this value to reduce memory usage.
      -config string
      the path of the configuration file. Drainer reads the configuration file first.
      If the corresponding configuration also exists in the command line parameters, Drainer uses the command line values to override those in the configuration file.
      -data-dir string
      the directory where the Drainer data is stored ("data.drainer" by default)
      -dest-db-type string
      the downstream service type of Drainer
      The value can be "mysql", "tidb", "kafka", or "file" ("mysql" by default).
      -detect-interval int
      the interval of checking the online Pump in PD ("10" by default, in seconds)
      -disable-detect
      whether to disable conflict detection
      -disable-dispatch
      whether to disable the feature of splitting a single binlog file into multiple SQL statements. If it is set to "true", each binlog file is restored to a single transaction for replication based on the order of binlogs.
      Set it to "false" when the downstream is MySQL.
      -ignore-schemas string
      the db filter list ("INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql,test" by default)
      The Rename DDL operation on tables in `ignore-schemas` is not supported.
      -initial-commit-ts
      If Drainer does not have the related breakpoint information, you can configure it using this parameter ("-1" by default).
      If the value of this parameter is `-1`, Drainer automatically obtains the latest timestamp from PD.
      -log-file string
      the path of the log file
      -log-rotate string
      the switch frequency of log files (hour/day)
      -metrics-addr string
      the Prometheus Pushgateway address
      If it is not set, the monitoring metrics are not reported.
      -metrics-interval int
      the report frequency of the monitoring metrics ("15" by default, in seconds)
      -node-id string
      the unique ID of a Drainer node. If you do not specify this ID, the system automatically generates an ID based on the host name and listening port.
      -pd-urls string
      the address of the PD cluster nodes (-pd-urls="http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379")
      -safe-mode
      whether to enable safe mode so that data can be written into the downstream MySQL/TiDB repeatedly
      This mode replaces the `INSERT` statement with the `REPLACE` statement and splits the `UPDATE` statement into `DELETE` plus `REPLACE`.
      -txn-batch int
      the number of SQL statements of a transaction that are output to the downstream database ("1" by default)
    • Taking deploying Drainer on “192.168.0.13” as an example, the Drainer configuration file is as follows:

      # Drainer Configuration
      # the address through which Drainer provides the service ("192.168.0.13:8249")
      addr = "192.168.0.13:8249"
      # the address through which Drainer provides the external service
      advertise-addr = "192.168.0.13:8249"
      # the interval of checking the online Pump in PD ("10" by default, in seconds)
      detect-interval = 10
      # the directory where the Drainer data is stored ("data.drainer" by default)
      data-dir = "data.drainer"
      # the address of the PD cluster nodes (each separated by a comma with no whitespace)
      pd-urls = "http://192.168.0.16:2379,http://192.168.0.15:2379,http://192.168.0.14:2379"
      # the path of the log file
      log-file = "drainer.log"
      # Drainer compresses the data when it gets the binlog from Pump. The value can be "gzip". If it is not configured, the data is not compressed.
      # compressor = "gzip"
      # [security]
      # This section is generally commented out if no special security settings are required.
      # The file path containing a list of trusted SSL CAs connected to the cluster.
      # ssl-ca = "/path/to/ca.pem"
      # The path to the X509 certificate in PEM format that is connected to the cluster.
      # ssl-cert = "/path/to/pump.pem"
      # The path to the X509 key in PEM format that is connected to the cluster.
      # ssl-key = "/path/to/pump-key.pem"
      # Syncer Configuration
      [syncer]
      # If this item is set, the sql-mode is used to parse DDL statements.
      # If the downstream database is MySQL or TiDB, the downstream sql-mode
      # is also set to this value.
      # sql-mode = "STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
      # the number of SQL statements of a transaction that are output to the downstream database ("20" by default)
      txn-batch = 20
      # the number of concurrent downstream replication workers. A larger value gives
      # better replication throughput ("16" by default).
      worker-count = 16
      # whether to disable the feature of splitting a single binlog file into multiple SQL statements. If it is set to "true",
      # each binlog file is restored to a single transaction for replication based on the order of binlogs.
      # If the downstream service is MySQL, set it to "false".
      disable-dispatch = false
      # In safe mode, data can be written into the downstream MySQL/TiDB repeatedly.
      # This mode replaces the `INSERT` statement with the `REPLACE` statement and splits the `UPDATE` statement into `DELETE` plus `REPLACE` statements.
      safe-mode = false
      # the downstream service type of Drainer ("mysql" by default)
      # Valid values: "mysql", "tidb", "file", and "kafka".
      db-type = "mysql"
      # If the `commit ts` of a transaction is in the list, the transaction is filtered out and not replicated to the downstream.
      ignore-txn-commit-ts = []
      # the db filter list ("INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql,test" by default)
      # The Rename DDL operation on tables in `ignore-schemas` is not supported.
      ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql"
      # `replicate-do-db` has priority over `replicate-do-table`. When they have the same `db` name,
      # regular expressions are supported for configuration.
      # The regular expression should start with "~".
      # replicate-do-db = ["~^b.*","s1"]
      # [syncer.relay]
      # the directory in which the relay log is saved. The relay log is not enabled if the value is empty.
      # The configuration only takes effect if the downstream is TiDB or MySQL.
      # log-dir = ""
      # the maximum size of each relay log file
      # max-file-size = 10485760
      # [[syncer.replicate-do-table]]
      # db-name ="test"
      # tbl-name = "log"
      # [[syncer.replicate-do-table]]
      # db-name ="test"
      # tbl-name = "~^a.*"
      # Ignore the replication of some tables
      # [[syncer.ignore-table]]
      # db-name = "test"
      # tbl-name = "log"
      # the server parameters of the downstream database when `db-type` is set to "mysql"
      [syncer.to]
      host = "192.168.0.13"
      user = "root"
      # If you do not want to set a cleartext `password` in the configuration file, you can create `encrypted_password` using `./binlogctl -cmd encrypt -text string`.
      # When `encrypted_password` is not empty, the `password` above is ignored, because `encrypted_password` and `password` cannot take effect at the same time.
      password = ""
      encrypted_password = ""
      port = 3306
      [syncer.to.checkpoint]
      # When the checkpoint type is "mysql" or "tidb", this option can be
      # enabled to change the database that saves the checkpoint.
      # schema = "tidb_binlog"
      # Currently only the "mysql" and "tidb" checkpoint types are supported.
      # You can remove the comment tag to control where to save the checkpoint.
      # The default method of saving the checkpoint for the downstream db-type:
      # mysql/tidb -> in the downstream MySQL or TiDB database
      # file/kafka -> file in `data-dir`
      # type = "mysql"
      # host = "127.0.0.1"
      # user = "root"
      # password = ""
      # `encrypted_password` is encrypted using `./binlogctl -cmd encrypt -text string`.
      # When `encrypted_password` is not empty, the `password` above is ignored.
      # encrypted_password = ""
      # port = 3306
      # the directory where the binlog file is stored when `db-type` is set to "file"
      # [syncer.to]
      # dir = "data.drainer"
      # the Kafka configuration when `db-type` is set to "kafka"
      # [syncer.to]
      # Only one of kafka-addrs and zookeeper-addrs is needed. If both are present, the program gives priority
      # to the Kafka address in ZooKeeper.
      # zookeeper-addrs = "127.0.0.1:2181"
      # kafka-addrs = "127.0.0.1:9092"
      # kafka-version = "0.8.2.0"
      # The maximum number of messages (number of binlogs) in a broker request. If it is left blank or set to a value smaller than 0, the default value 1024 is used.
      # kafka-max-messages = 1024
      # The maximum size of a broker request (unit: byte). The default value is 1 GiB and the maximum value is 2 GiB.
      # kafka-max-message-size = 1073741824
      # the topic name of the Kafka cluster that saves the binlog data. The default value is <cluster-id>_obinlog.
      # To run multiple Drainers replicating data to the same Kafka cluster, you need to set a different `topic-name` for each Drainer.
      # topic-name = ""
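      As noted in the comments above, if you do not want a cleartext password in `[syncer.to]`, you can generate the `encrypted_password` value with binlogctl (the password below is a placeholder):

      ```shell
      # Print the encrypted form of a placeholder password; paste the output
      # into encrypted_password = "..." in drainer.toml.
      ./binlogctl -cmd encrypt -text your_password
      ```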
    • The example of starting Drainer:

      Note

      If the downstream is MySQL or TiDB, to guarantee data integrity, you need to make a full backup of the data, obtain the initial-commit-ts value, and restore the backup to the downstream before starting Drainer for the first time.

      When Drainer is started for the first time, use the initial-commit-ts parameter.

      ./drainer -config drainer.toml -initial-commit-ts {initial-commit-ts}

      If a parameter is set both on the command line and in the configuration file, the command line value is used.
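      One way to obtain the `initial-commit-ts` value is from the metadata of the full backup. For example, if the full backup is made with Dumpling or mydumper, the `metadata` file in the backup directory records the snapshot timestamp as `Pos` (the backup path below is an example):

      ```shell
      # Show the snapshot position recorded by the backup tool;
      # the Pos value is the TSO to pass as -initial-commit-ts.
      grep -A 3 'SHOW MASTER STATUS' /data/backup/metadata
      ```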

  3. Start the TiDB server.

    • After starting Pump and Drainer, start the TiDB server with binlog enabled by adding the following section to the TiDB server configuration file:

      [binlog]
      enable=true
    • The TiDB server obtains the addresses of registered Pumps from PD and streams binlog data to all of them. If no Pump instances are registered, the TiDB server refuses to start or blocks until a Pump instance comes online.
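      To confirm that binlog is enabled and that the Pump and Drainer nodes are healthy, you can query the cluster state from any TiDB server, for example:

      ```sql
      -- Check whether binlog writing is enabled on this TiDB server.
      SHOW VARIABLES LIKE 'log_bin';
      -- List the registered Pump and Drainer nodes and their states.
      SHOW PUMP STATUS;
      SHOW DRAINER STATUS;
      ```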

Note

  • When TiDB is running, you need to guarantee that at least one Pump is running normally.
  • To enable the TiDB Binlog service in TiDB server, use the -enable-binlog startup parameter in TiDB, or add enable=true to the [binlog] section of the TiDB server configuration file.
  • Make sure that the TiDB Binlog service is enabled in all TiDB instances in the same cluster; otherwise, upstream and downstream data inconsistency might occur during data replication. If you want to temporarily run a TiDB instance where the TiDB Binlog service is not enabled, set run_ddl=false in the TiDB configuration file.
  • Drainer does not support the rename DDL operation on the table of ignore schemas (the schemas in the filter list).
  • If you want to start Drainer in an existing TiDB cluster, generally you need to make a full backup of the cluster data, obtain snapshot timestamp, import the data to the target database, and then start Drainer to replicate the incremental data from the corresponding snapshot timestamp.
  • When the downstream database is TiDB or MySQL, ensure that the sql_mode in the upstream and downstream databases are consistent. In other words, the sql_mode should be the same when each SQL statement is executed in the upstream and replicated to the downstream. You can execute the select @@sql_mode; statement in the upstream and downstream respectively to compare sql_mode.
  • When a DDL statement is supported in the upstream but incompatible with the downstream, Drainer fails to replicate data. An example is to replicate the CREATE TABLE t1(a INT) ROW_FORMAT=FIXED; statement when the downstream database MySQL uses the InnoDB engine. In this case, you can configure skipping transactions in Drainer, and manually execute compatible statements in the downstream database.
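    For example, to skip a transaction whose DDL statement is incompatible with the downstream, add its commit TS to `ignore-txn-commit-ts` in the Drainer configuration (the TS value below is a placeholder):

    ```toml
    [syncer]
    # commit TS values of transactions to skip; the number is a placeholder
    ignore-txn-commit-ts = [418590868019871749]
    ```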