Usage Notes

The TPCH Catalog uses the Trino Connector compatibility framework and the Trino TPCH Connector to quickly build TPCH test datasets.

Tip:

This feature is supported starting from Doris version 3.0.0.

Compiling the TPCH Connector

JDK 17 is required.

```shell
git clone https://github.com/trinodb/trino.git
cd trino
git checkout 435
cd plugin/trino-tpch
mvn clean install -DskipTests
```

After compiling, you will find the trino-tpch-435/ directory under trino/plugin/trino-tpch/target/.

You can also directly download the precompiled trino-tpch-435.tar.gz and extract it.

Deploying the TPCH Connector

Place the trino-tpch-435/ directory under the connectors/ directory in the deployment path of every FE and BE node (create the connectors/ directory manually if it does not exist).

```text
├── bin
├── conf
├── connectors
│   ├── trino-tpch-435
...
```

After deployment, it is recommended to restart the FE and BE nodes to ensure the Connector is loaded correctly.

Creating the TPCH Catalog

```sql
CREATE CATALOG `tpch` PROPERTIES (
    "type" = "trino-connector",
    "trino.connector.name" = "tpch",
    "trino.tpch.column-naming" = "STANDARD",
    "trino.tpch.splits-per-node" = "32"
);
```

The trino.tpch.splits-per-node property controls the degree of concurrency. Setting it to twice the number of cores per BE node is recommended to achieve optimal concurrency and improve data generation efficiency.
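If the value needs tuning after the catalog already exists, Doris allows catalog properties to be changed in place; a sketch, assuming BE nodes with 16 cores each (so 32 splits):

```sql
-- Raise the per-node split count to 2x the BE core count (here: 16 cores -> 32 splits).
ALTER CATALOG tpch SET PROPERTIES ("trino.tpch.splits-per-node" = "32");
```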

When "trino.tpch.column-naming" is set to "STANDARD", column names in the TPCH tables are prefixed with the table-name abbreviation, such as l_orderkey in the lineitem table; otherwise, the column is simply named orderkey.
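In practice this means queries against a "STANDARD"-named catalog must use the prefixed names; for example:

```sql
-- With "trino.tpch.column-naming" = "STANDARD", lineitem columns carry the l_ prefix.
SELECT l_orderkey, l_extendedprice FROM tpch.sf1.lineitem LIMIT 5;

-- With the default naming, the same column would be referenced as `orderkey` instead.
```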

Using the TPCH Catalog

The TPCH Catalog includes pre-configured TPCH datasets of different scale factors, which can be viewed using the SHOW DATABASES and SHOW TABLES commands.

```sql
mysql> SWITCH tpch;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| sf1                |
| sf100              |
| sf1000             |
| sf10000            |
| sf100000           |
| sf300              |
| sf3000             |
| sf30000            |
| tiny               |
+--------------------+
11 rows in set (0.00 sec)

mysql> USE sf1;
mysql> SHOW TABLES;
+---------------+
| Tables_in_sf1 |
+---------------+
| customer      |
| lineitem      |
| nation        |
| orders        |
| part          |
| partsupp      |
| region        |
| supplier      |
+---------------+
8 rows in set (0.00 sec)
```

You can directly query these tables using the SELECT statement.
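For example, a simple aggregation over the generated data (an illustration only; since the data is produced on the fly, runtime grows with the scale factor):

```sql
-- Count lineitem rows per return flag on the smallest dataset.
SELECT l_returnflag, COUNT(*) AS cnt
FROM tpch.tiny.lineitem
GROUP BY l_returnflag
ORDER BY l_returnflag;
```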

Tip:

The data in these pre-configured datasets is not actually stored but generated in real-time during queries. Therefore, these datasets are not suitable for direct benchmarking. They are more appropriate for writing to other target tables (such as Doris internal tables, Hive, Iceberg, and other data sources supported by Doris) via INSERT INTO SELECT, after which performance tests can be conducted on the target tables.
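For instance, assuming a Doris internal table internal.tpch_sf100.lineitem with a matching schema has already been created (the database and table names here are only placeholders), the generated data can be materialized with:

```sql
-- Materialize the generated sf100 lineitem data into an existing internal table.
INSERT INTO internal.tpch_sf100.lineitem
SELECT * FROM tpch.sf100.lineitem;
```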

Best Practices

Quickly Build TPCH Test Dataset

You can quickly build a TPCH test dataset using the CTAS (Create Table As Select) statement:

```sql
CREATE TABLE hive.tpch100.customer PROPERTIES("file_format" = "parquet") AS SELECT * FROM tpch.sf100.customer;
CREATE TABLE hive.tpch100.lineitem PROPERTIES("file_format" = "parquet") AS SELECT * FROM tpch.sf100.lineitem;
CREATE TABLE hive.tpch100.nation PROPERTIES("file_format" = "parquet") AS SELECT * FROM tpch.sf100.nation;
CREATE TABLE hive.tpch100.orders PROPERTIES("file_format" = "parquet") AS SELECT * FROM tpch.sf100.orders;
CREATE TABLE hive.tpch100.part PROPERTIES("file_format" = "parquet") AS SELECT * FROM tpch.sf100.part;
CREATE TABLE hive.tpch100.partsupp PROPERTIES("file_format" = "parquet") AS SELECT * FROM tpch.sf100.partsupp;
CREATE TABLE hive.tpch100.region PROPERTIES("file_format" = "parquet") AS SELECT * FROM tpch.sf100.region;
CREATE TABLE hive.tpch100.supplier PROPERTIES("file_format" = "parquet") AS SELECT * FROM tpch.sf100.supplier;
```
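After the CTAS statements finish, it is worth sanity-checking the copies before benchmarking, for example by comparing row counts (the expected counts depend on the scale factor and are not listed here):

```sql
-- Compare row counts between the generated source and the materialized copy.
SELECT COUNT(*) FROM tpch.sf100.lineitem;
SELECT COUNT(*) FROM hive.tpch100.lineitem;
```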

Tip:

On a Doris cluster with 3 BE nodes, each with 16 cores, creating a TPCH 1000 dataset in Hive takes approximately 25 minutes, and TPCH 10000 takes about 4 to 5 hours.