Migrate and Merge MySQL Shards of Small Datasets to TiDB

If you want to migrate and merge multiple upstream MySQL database instances into one downstream TiDB database, and the amount of data is not too large, you can use DM to migrate MySQL shards. "Small datasets" in this document usually means data of around 1 TiB or less. Through examples in this document, you can learn the operation steps, precautions, and troubleshooting of the migration.

This document applies to migrating MySQL shards less than 1 TiB in total. If you want to migrate MySQL shards with a total of more than 1 TiB of data, the migration will take a long time using DM alone. In this case, it is recommended that you follow the operation introduced in Migrate and Merge MySQL Shards of Large Datasets to TiDB to perform the migration.

This document takes a simple example to illustrate the migration procedure. The MySQL shards of the two data source MySQL instances in the example are migrated to the downstream TiDB cluster.

In this example, both MySQL Instance 1 and MySQL Instance 2 contain the following schemas and tables. You migrate and merge the tables with the sale_ prefix from the store_01 and store_02 schemas in both instances into the downstream sale table in the store schema.

Schema     Table
store_01   sale_01, sale_02
store_02   sale_01, sale_02

Target schemas and tables:

Schema     Table
store      sale

Prerequisites

Before starting the migration, make sure you have completed the following tasks:

Check conflicts for the sharded tables

If the migration involves merging data from different sharded tables, primary key or unique index conflicts may occur during the merge. Therefore, before migration, you need to take a deep look at the current sharding scheme from the business point of view, and find a way to avoid the conflicts. For more details, see Handle conflicts between primary keys or unique indexes across multiple sharded tables. The following is a brief description.

In this example, sale_01 and sale_02 have the same table structure, as follows:

  CREATE TABLE `sale_01` (
    `id` bigint(20) NOT NULL AUTO_INCREMENT,
    `sid` bigint(20) NOT NULL,
    `pid` bigint(20) NOT NULL,
    `comment` varchar(255) DEFAULT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `sid` (`sid`)
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1

The id column is the primary key, and the sid column is the sharding key. The id column is auto-incremental, so duplicated id ranges across multiple sharded tables will cause data conflicts during the merge. The sid column can ensure that the index is globally unique, so you can follow the steps in Remove the primary key attribute of the auto-increment primary key to bypass the id column. The merged sale table in the downstream then uses the following structure:

  CREATE TABLE `sale` (
    `id` bigint(20) NOT NULL,
    `sid` bigint(20) NOT NULL,
    `pid` bigint(20) NOT NULL,
    `comment` varchar(255) DEFAULT NULL,
    INDEX (`id`),
    UNIQUE KEY `sid` (`sid`)
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1
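
This scheme assumes that sid is globally unique across all sharded tables. If you want to verify that assumption before starting the task, the following query is a minimal sketch of such a check, based on the schema and table names in this example. Any returned row indicates a sid value that appears in more than one shard of the same instance and must be resolved before the merge; you also need to compare sid values across the two instances.

  -- Check for sid values that occur in more than one sharded table of one instance.
  -- Any returned row is a conflict that the merged UNIQUE KEY `sid` cannot accept.
  SELECT sid, COUNT(*) AS cnt
  FROM (
    SELECT sid FROM store_01.sale_01
    UNION ALL
    SELECT sid FROM store_01.sale_02
    UNION ALL
    SELECT sid FROM store_02.sale_01
    UNION ALL
    SELECT sid FROM store_02.sale_02
  ) AS all_shards
  GROUP BY sid
  HAVING cnt > 1;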

Step 1. Load data sources

Create a new data source file called source1.yaml, which configures an upstream data source into DM, and add the following content:

  # Configuration.
  source-id: "mysql-01"    # Must be unique.
  # Specifies whether DM-worker pulls binlogs with GTID (Global Transaction Identifier).
  # The prerequisite is that you have already enabled GTID in the upstream MySQL.
  # If you have configured the upstream database service to switch master between different nodes automatically, you must enable GTID.
  enable-gtid: true
  from:
    host: "${host}"           # For example: 172.16.10.81
    user: "root"
    password: "${password}"   # Plaintext passwords are supported but not recommended. It is recommended that you use dmctl encrypt to encrypt plaintext passwords.
    port: ${port}             # For example: 3306
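
Because enable-gtid is set to true in this example, GTID must already be enabled in the upstream MySQL instance. If you are not sure, you can run the following statements on each upstream instance before loading the data source; this is only a quick check and is not part of the DM configuration. Both variables are expected to be ON.

  -- Confirm that the upstream MySQL instance has GTID enabled before using enable-gtid: true.
  SHOW VARIABLES LIKE 'gtid_mode';
  SHOW VARIABLES LIKE 'enforce_gtid_consistency';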

Run the following command in a terminal. Use tiup dmctl to load the data source configuration into the DM cluster:

  tiup dmctl --master-addr ${advertise-addr} operate-source create source1.yaml

The parameters are described as follows.

Parameter              Description
--master-addr          {advertise-addr} of any DM-master node in the cluster that dmctl connects to. For example: 172.16.10.71:8261
operate-source create  Loads the data sources to the DM cluster.

Repeat the above steps until all data sources are added to the DM cluster.

Step 2. Configure the migration task

Create a task configuration file named task1.yaml and write the following content to it:

  name: "shard_merge"                           # The name of the task. Should be globally unique.
  # Task mode. You can set it to the following:
  # - full: Performs only full data migration (incremental replication is skipped)
  # - incremental: Only performs real-time incremental replication using binlog. (full data migration is skipped)
  # - all: Performs both full data migration and incremental replication. For migrating a small to medium amount of data here, use this option.
  task-mode: all
  # Required for the MySQL shards. By default, the "pessimistic" mode is used.
  # If you have a deep understanding of the principles and usage limitations of the optimistic mode, you can also use the "optimistic" mode.
  # For more information, see [Merge and Migrate Data from Sharded Tables](https://docs.pingcap.com/tidb/v7.1/feature-shard-merge/)
  shard-mode: "pessimistic"
  meta-schema: "dm_meta"                        # A schema will be created in the downstream database to store the metadata
  ignore-checking-items: ["auto_increment_ID"]  # In this example, there are auto-increment primary keys upstream, so you do not need to check this item.
  target-database:
    host: "${host}"                             # For example: 192.168.0.1
    port: 4000
    user: "root"
    password: "${password}"                     # Plaintext passwords are supported but not recommended. It is recommended that you use dmctl encrypt to encrypt plaintext passwords.
  mysql-instances:
    -
      source-id: "mysql-01"                                    # ID of the data source, which is source-id in source1.yaml
      route-rules: ["sale-route-rule"]                         # Table route rules applied to the data source
      filter-rules: ["store-filter-rule", "sale-filter-rule"]  # Binlog event filter rules applied to the data source
      block-allow-list: "log-bak-ignored"                      # Block & Allow Lists rules applied to the data source
    -
      source-id: "mysql-02"
      route-rules: ["sale-route-rule"]
      filter-rules: ["store-filter-rule", "sale-filter-rule"]
      block-allow-list: "log-bak-ignored"
  # Configurations for merging MySQL shards
  routes:
    sale-route-rule:
      schema-pattern: "store_*"   # Merge schemas store_01 and store_02 to the store schema in the downstream
      table-pattern: "sale_*"     # Merge tables sale_01 and sale_02 of schemas store_01 and store_02 to the sale table in the downstream
      target-schema: "store"
      target-table: "sale"
      # Optional. Used for extracting the source information of sharded schemas and tables and writing the information to the user-defined columns in the downstream. If these options are configured, you need to manually create a merged table in the downstream. For details, see the following table routing setting.
      # extract-table:            # Extracts and writes the table name suffix without the sale_ part to the c_table column of the merged table. For example, 01 is extracted and written to the c_table column for the sharded table sale_01.
      #   table-regexp: "sale_(.*)"
      #   target-column: "c_table"
      # extract-schema:           # Extracts and writes the schema name suffix without the store_ part to the c_schema column of the merged table. For example, 02 is extracted and written to the c_schema column for the sharded schema store_02.
      #   schema-regexp: "store_(.*)"
      #   target-column: "c_schema"
      # extract-source:           # Extracts and writes the source instance information to the c_source column of the merged table. For example, mysql-01 is extracted and written to the c_source column for the data source mysql-01.
      #   source-regexp: "(.*)"
      #   target-column: "c_source"
  # Filters out some DDL events.
  filters:
    sale-filter-rule:             # Filter name.
      schema-pattern: "store_*"   # The binlog events or DDL SQL statements of upstream MySQL instance schemas that match schema-pattern are filtered by the rules below.
      table-pattern: "sale_*"     # The binlog events or DDL SQL statements of upstream MySQL instance tables that match table-pattern are filtered by the rules below.
      events: ["truncate table", "drop table", "delete"]   # The binlog event array.
      action: Ignore              # The string (`Do`/`Ignore`). `Do` is the allow list. `Ignore` is the block list.
    store-filter-rule:
      schema-pattern: "store_*"
      events: ["drop database"]
      action: Ignore
  # Block and allow list
  block-allow-list:               # Filter or only migrate all operations of some databases or some tables.
    log-bak-ignored:              # Rule name.
      do-dbs: ["store_*"]         # The allow list of the schemas to be migrated, similar to replicate-do-db in MySQL.

The above example is the minimum configuration to perform the migration task. For more information, see DM Advanced Task Configuration File.
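
If you uncomment the optional extract-table, extract-schema, and extract-source settings above, DM does not create the merged table for you, so you need to create it in the downstream manually. The following statement is only a sketch of what such a table might look like in this example; the c_table, c_schema, and c_source column names match the target-column values in the commented-out configuration, and their types and lengths are assumptions you can adjust to your own data.

  -- A possible downstream merged table when the extract-* options are enabled.
  -- The c_table, c_schema, and c_source columns store the extracted table suffix,
  -- schema suffix, and data source name respectively.
  CREATE TABLE `store`.`sale` (
    `id` bigint(20) NOT NULL,
    `sid` bigint(20) NOT NULL,
    `pid` bigint(20) NOT NULL,
    `comment` varchar(255) DEFAULT NULL,
    `c_table` varchar(16) DEFAULT NULL,
    `c_schema` varchar(16) DEFAULT NULL,
    `c_source` varchar(32) DEFAULT NULL,
    INDEX (`id`),
    UNIQUE KEY `sid` (`sid`)
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1;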

For more information on routes, filters and other configurations in the task file, see the following documents:

Step 3. Start the task

Before starting a migration task, run the check-task subcommand in tiup dmctl to check whether the configuration meets the requirements of DM so as to avoid possible errors.

  tiup dmctl --master-addr ${advertise-addr} check-task task1.yaml

Run the following command in tiup dmctl to start a migration task:

  tiup dmctl --master-addr ${advertise-addr} start-task task1.yaml

Parameter      Description
--master-addr  {advertise-addr} of any DM-master node in the cluster that dmctl connects to. For example: 172.16.10.71:8261
start-task     Starts the data migration task.

If the migration task fails to start, modify the configuration information according to the error information, and then run start-task task1.yaml again to start the migration task. If you encounter problems, see Handle Errors and FAQ.

Step 4. Check the task

After starting the migration task, you can run the query-status command using tiup dmctl to view the status of the task.

  tiup dmctl --master-addr ${advertise-addr} query-status ${task-name}

If you encounter errors, use query-status ${task-name} to view more detailed information. For details about the query results, task status and sub task status of the query-status command, see TiDB Data Migration Query Status.
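
After the full data migration finishes, you can also run a simple sanity check in the downstream TiDB. The following query is only an example of such a check; the total row count of store.sale is expected to match the sum of the row counts of all upstream sale_* tables in this example.

  -- A basic sanity check in the downstream: compare this count with the sum of
  -- row counts of all upstream sharded tables.
  SELECT COUNT(*) FROM store.sale;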

Step 5. Monitor tasks and check logs (optional)

You can view the history of a migration task and internal operational metrics through Grafana or logs.

  • Via Grafana

    If Prometheus, Alertmanager, and Grafana are correctly deployed when you deploy the DM cluster using TiUP, you can view DM monitoring metrics in Grafana. Specifically, enter the IP address and port specified during deployment in Grafana and select the DM dashboard.

  • Via logs

    When DM is running, DM-master, DM-worker, and dmctl output logs, which include information about migration tasks. The log directory of each component is as follows.

    • DM-master log directory: It is specified by the DM-master process parameter --log-file. If DM is deployed using TiUP, the log directory is /dm-deploy/dm-master-8261/log/.
    • DM-worker log directory: It is specified by the DM-worker process parameter --log-file. If DM is deployed using TiUP, the log directory is /dm-deploy/dm-worker-8262/log/.

See also