Software and Hardware Requirements for TiDB Data Migration

TiDB Data Migration (DM) supports mainstream Linux operating systems. See the following table for specific version requirements:

| Linux OS | Version |
| :--- | :--- |
| Red Hat Enterprise Linux | 7.3 or later |
| CentOS | 7.3 or later |
| Oracle Enterprise Linux | 7.3 or later |
| Ubuntu LTS | 16.04 or later |

DM can be deployed and run on a 64-bit generic hardware server platform (Intel x86-64 architecture) and on mainstream virtualization environments. For servers used in the development, testing, and production environments, this section describes the recommended hardware configurations; these figures do not include the resources used by the operating system.

Development and test environments

| Component | CPU | Memory | Local Storage | Network | Number of Instances (Minimum Requirement) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| DM-master | 4 core+ | 8 GB+ | SAS, 200 GB+ | Gigabit network card | 1 |
| DM-worker | 8 core+ | 16 GB+ | SAS, 200 GB+ (greater than the size of the migrated data) | Gigabit network card | The number of upstream MySQL instances |

Note

  • In a test environment, the DM-master and DM-worker used for functional verification can be deployed on the same server.
  • To avoid skewing performance test results, do not use low-performance storage and network hardware.
  • If you only need to verify functionality, you can deploy a single DM-master on one machine. The number of deployed DM-workers must be greater than or equal to the number of upstream MySQL instances. To ensure high availability, it is recommended to deploy more DM-workers.
  • DM-worker stores full data during the dump and load phases, so its disk space must be larger than the total amount of data to be migrated. If the relay log is enabled for the migration task, DM-worker needs additional disk space to store upstream binlog data. To estimate the upstream data size, see the sketch after this list.
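
As a rough sizing aid before you deploy, the following query sums the table and index sizes on one upstream MySQL instance. This is a minimal sketch: excluding the system schemas is an assumption to adjust for your environment, and FORMAT_BYTES requires MySQL 8.0.16 or later (on earlier versions, read the raw byte counts instead).

```sql
-- Sketch: estimate the total data volume on one upstream MySQL instance
-- to size the DM-worker disk. The excluded system schemas are an assumption.
SELECT
    FORMAT_BYTES(SUM(DATA_LENGTH + INDEX_LENGTH)) AS 'Estimated Size'
FROM
    information_schema.tables
WHERE
    TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
```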

Production environment

| Component | CPU | Memory | Hard Disk Type | Network | Number of Instances (Minimum Requirement) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| DM-master | 4 core+ | 8 GB+ | SAS, 200 GB+ | Gigabit network card | 3 |
| DM-worker | 16 core+ | 32 GB+ | SSD, 200 GB+ (greater than the size of the migrated data) | 10 Gigabit network card | Greater than the number of upstream MySQL instances |
| Monitor | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 |

Note

  • In the production environment, it is not recommended to deploy and run DM-master and DM-worker on the same server, because when DM-worker writes data to disk, it might interfere with the disk usage of DM-master's high availability component.
  • If a performance issue occurs, it is recommended that you modify the task configuration file according to the Optimize Configuration of DM document. If tuning the configuration file does not effectively resolve the issue, consider upgrading the server hardware.

Downstream storage space requirements

The target TiKV cluster must have enough disk space to store the imported data. In addition to the standard hardware requirements, the storage space of the target TiKV cluster must be larger than the size of the data source × the number of replicas × 2. For example, if the cluster uses 3 replicas by default, the target TiKV cluster must have storage space larger than 6 times the size of the data source. The formula includes the factor of 2 because:

  • Indexes might take extra space.
  • RocksDB has a space amplification effect.

You can estimate the data volume by using the following SQL statements to summarize the DATA_LENGTH and INDEX_LENGTH fields:

```sql
-- Calculate the size of all schemas
SELECT
    TABLE_SCHEMA,
    FORMAT_BYTES(SUM(DATA_LENGTH)) AS 'Data Size',
    FORMAT_BYTES(SUM(INDEX_LENGTH)) AS 'Index Size'
FROM
    information_schema.tables
GROUP BY
    TABLE_SCHEMA;

-- Calculate the 5 largest tables
SELECT
    TABLE_NAME,
    TABLE_SCHEMA,
    FORMAT_BYTES(SUM(DATA_LENGTH)) AS 'Data Size',
    FORMAT_BYTES(SUM(INDEX_LENGTH)) AS 'Index Size',
    FORMAT_BYTES(SUM(DATA_LENGTH + INDEX_LENGTH)) AS 'Total Size'
FROM
    information_schema.tables
GROUP BY
    TABLE_NAME, TABLE_SCHEMA
ORDER BY
    SUM(DATA_LENGTH + INDEX_LENGTH) DESC
LIMIT 5;
```
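
After estimating the source data size, apply the formula above (estimated size × number of replicas × 2) and compare the result with the capacity of the target cluster. As a hedged sketch, assuming the target TiDB cluster exposes the INFORMATION_SCHEMA.TIKV_STORE_STATUS table, the following query lists the capacity and available space of each TiKV store:

```sql
-- Sketch: run on the target TiDB cluster to check TiKV storage capacity.
-- Assumes INFORMATION_SCHEMA.TIKV_STORE_STATUS is available in this version.
SELECT
    STORE_ID,
    ADDRESS,
    CAPACITY,
    AVAILABLE
FROM
    INFORMATION_SCHEMA.TIKV_STORE_STATUS;
```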