Action Jars

After the Flink Local Cluster has been started, you can run the action jar with the following command.

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    <action> \
    <args>

The following command is used to compact a table.

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    compact \
    --path <TABLE_PATH>
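
For instance, a concrete invocation might look like this (the table path below is a placeholder for illustration; use the actual path of your table):

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    compact \
    --path hdfs:///path/to/warehouse/default.db/orders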

Merging into table

Paimon supports "MERGE INTO" by submitting a 'merge_into' job through flink run.

Important table properties setting:

  1. Only primary key tables support this feature; a minimal table definition is sketched after this list.
  2. The action won't produce UPDATE_BEFORE, so it's not recommended to set 'changelog-producer' = 'input'.
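
A minimal sketch of such a target table in Flink SQL, assuming a session that already uses a Paimon catalog (the table and column names are illustrative, not required by the action):

    -- 'id' is a primary key, which merge_into requires;
    -- 'changelog-producer' is left at its default instead of being set to 'input'.
    CREATE TABLE T (
        id INT,
        price DOUBLE,
        mark STRING,
        PRIMARY KEY (id) NOT ENFORCED
    );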

The design references the following syntax:

    MERGE INTO target-table
    USING source_table | source-expr AS source-alias
    ON merge-condition
    WHEN MATCHED [AND matched-condition]
      THEN UPDATE SET xxx
    WHEN MATCHED [AND matched-condition]
      THEN DELETE
    WHEN NOT MATCHED [AND not_matched_condition]
      THEN INSERT VALUES (xxx)
    WHEN NOT MATCHED BY SOURCE [AND not-matched-by-source-condition]
      THEN UPDATE SET xxx
    WHEN NOT MATCHED BY SOURCE [AND not-matched-by-source-condition]
      THEN DELETE

The merge_into action uses "upsert" semantics instead of "update": if the row exists, it is updated; otherwise it is inserted. For example, in a non-primary-key table you can update every column, but in a primary key table, if you want to update the primary keys, you have to insert a new row whose primary keys differ from those of the existing rows. In this scenario, "upsert" is useful.
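
A hypothetical walk-through of the upsert behaviour (the table contents below are invented for illustration):

    -- target T (id is the primary key) contains: (1, 5.0, 'normal')
    -- a source row with order_id = 1 matches on T.id = S.order_id, so
    --   --matched_upsert_set "price = 10.0"
    -- rewrites that row as (1, 10.0, 'normal'), i.e. an update of the existing key;
    -- a source row with order_id = 9 matches nothing, so a not-matched-insert
    -- action appends a brand-new row with key 9 instead.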

Run the following command to submit a ‘merge_into’ job for the table.

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    merge_into \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table <target-table> \
    [--target_as <target-table-alias>] \
    --source_table <source-table-name> \
    [--source_sql <sql> ...] \
    --on <merge-condition> \
    --merge_actions <matched-upsert,matched-delete,not-matched-insert,not-matched-by-source-upsert,not-matched-by-source-delete> \
    --matched_upsert_condition <matched-condition> \
    --matched_upsert_set <upsert-changes> \
    --matched_delete_condition <matched-condition> \
    --not_matched_insert_condition <not-matched-condition> \
    --not_matched_insert_values <insert-values> \
    --not_matched_by_source_upsert_condition <not-matched-by-source-condition> \
    --not_matched_by_source_upsert_set <not-matched-upsert-changes> \
    --not_matched_by_source_delete_condition <not-matched-by-source-condition> \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]

You can pass SQL statements with '--source_sql <sql> [--source_sql <sql> ...]' to configure the execution environment and create the source table at runtime.
    -- Examples:
    -- Find all orders mentioned in the source table, then mark them as important if the price is above 100
    -- or delete them if the price is under 10.
    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    merge_into \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table T \
    --source_table S \
    --on "T.id = S.order_id" \
    --merge_actions \
    matched-upsert,matched-delete \
    --matched_upsert_condition "T.price > 100" \
    --matched_upsert_set "mark = 'important'" \
    --matched_delete_condition "T.price < 10"

    -- For matched order rows, increase the price, and if there is no match, insert the order from the
    -- source table:
    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    merge_into \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table T \
    --source_table S \
    --on "T.id = S.order_id" \
    --merge_actions \
    matched-upsert,not-matched-insert \
    --matched_upsert_set "price = T.price + 20" \
    --not_matched_insert_values *

    -- For not-matched-by-source order rows (which are in the target table and do not match any row in the
    -- source table based on the merge-condition), decrease the price, or delete them if the mark is 'trivial':
    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    merge_into \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table T \
    --source_table S \
    --on "T.id = S.order_id" \
    --merge_actions \
    not-matched-by-source-upsert,not-matched-by-source-delete \
    --not_matched_by_source_upsert_condition "T.mark <> 'trivial'" \
    --not_matched_by_source_upsert_set "price = T.price - 20" \
    --not_matched_by_source_delete_condition "T.mark = 'trivial'"

    -- A --source_sql example:
    -- Create a temporary view S in a new catalog and use it as the source table.
    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    merge_into \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table T \
    --source_sql "CREATE CATALOG test_cat WITH (...)" \
    --source_sql "CREATE TEMPORARY VIEW test_cat.`default`.S AS SELECT order_id, price, 'important' FROM important_order" \
    --source_table test_cat.default.S \
    --on "T.id = S.order_id" \
    --merge_actions not-matched-insert \
    --not_matched_insert_values *

Explanation of the term 'matched':

  1. matched: changed rows are from the target table, and each can match a source table row based on the merge-condition and the optional matched-condition (source ∩ target).
  2. not matched: changed rows are from the source table, and none can match any target table row based on the merge-condition and the optional not_matched_condition (source − target).
  3. not matched by source: changed rows are from the target table, and none can match any source table row based on the merge-condition and the optional not-matched-by-source-condition (target − source).
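
A hypothetical example of how rows fall into these categories (the table contents are invented):

    -- target T ids:        {1, 2}
    -- source S order_ids:  {2, 3}
    -- merge-condition:     T.id = S.order_id
    --
    -- matched:               T.id = 2        (present on both sides)
    -- not matched:           S.order_id = 3  (only in the source)
    -- not matched by source: T.id = 1        (only in the target)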

Parameter formats:

  1. matched_upsert_changes:
    col = <source_table>.col | expression [, ...] (This sets <target_table>.col to the given value. Do not add '<target_table>.' before 'col'.)
    In particular, you can use '*' to set the columns with all source columns (this requires that the target table's schema equals the source's).
  2. not_matched_upsert_changes is similar to matched_upsert_changes, but you cannot reference the source table's columns or use '*'.
  3. insert_values:
    col1, col2, ..., col_end
    The values of all columns must be specified. For each column, you can reference <source_table>.col or use an expression.
    In particular, you can use '*' to insert with all source columns (this requires that the target table's schema equals the source's).
  4. not_matched_condition cannot use the target table's columns to construct the condition expression.
  5. not_matched_by_source_condition cannot use the source table's columns to construct the condition expression.
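
For instance, a set clause and an insert-values clause might look like this (the column names and expressions are illustrative only):

    --matched_upsert_set "price = S.price + 10, mark = 'updated'" \
    --not_matched_insert_values "S.order_id, S.price * 1.1, 'new'"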
Notes:

  1. The target alias cannot duplicate an existing table name.
  2. If the source table is not in the current catalog and current database, the source-table-name must be qualified (database.table, or catalog.database.table if a new catalog is created). For example:
    (1) If the source table 'my_source' is in 'my_db', qualify it:
    --source_table "my_db.my_source"
    (2) Example for SQL statements:
    When the SQL statements change the current catalog and database, it's OK not to qualify the source table name:
    --source_sql "CREATE CATALOG my_cat WITH (...)"
    --source_sql "USE CATALOG my_cat"
    --source_sql "CREATE DATABASE my_db"
    --source_sql "USE my_db"
    --source_sql "CREATE TABLE S ..."
    --source_table S
    but you must qualify it in the following case:
    --source_sql "CREATE CATALOG my_cat WITH (...)"
    --source_sql "CREATE TABLE my_cat.`default`.S ..."
    --source_table my_cat.default.S
    You can still use just 'S' as the source table name in the following arguments.
  3. At least one merge action must be specified.
  4. If both matched-upsert and matched-delete actions are present, their conditions must both be present too (the same applies to not-matched-by-source-upsert and not-matched-by-source-delete). Otherwise, all conditions are optional.
  5. All conditions, set changes and values should use Flink SQL syntax. To make sure the whole command runs normally in the shell, quote them with "" to escape blank spaces and use '\' to escape special characters in the statement. For example:
    --source_sql "CREATE TABLE T (k INT) WITH ('special-key' = '123\!')"

For more information about 'merge_into', see:

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    merge_into --help

Deleting from table

In Flink 1.16 and earlier versions, Paimon only supports deleting records by submitting a 'delete' job through flink run.

Run the following command to submit a ‘delete’ job for the table.

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    delete \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table <table-name> \
    --where <filter_spec> \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]

filter_spec is equivalent to the 'WHERE' clause in a SQL DELETE statement. Examples:

    age >= 18 AND age <= 60
    animal <> 'cat'
    id > (SELECT count(*) FROM employee)
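
A complete invocation might look like this (the warehouse path, database, table and predicate below are placeholders for illustration):

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    delete \
    --warehouse hdfs:///path/to/warehouse \
    --database default \
    --table orders \
    --where "price < 10 AND mark = 'trivial'"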

For more information about 'delete', see:

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    delete --help

Drop Partition

Run the following command to submit a drop_partition job for the table.

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    drop_partition \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table <table-name> \
    [--partition <partition_spec> [--partition <partition_spec> ...]] \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]

partition_spec:

    key1=value1,key2=value2...
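
For example, dropping two partitions of a hypothetical table partitioned by dt (the names and values are illustrative):

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    drop_partition \
    --warehouse hdfs:///path/to/warehouse \
    --database default \
    --table orders \
    --partition dt=2024-01-01 \
    --partition dt=2024-01-02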

For more information about drop_partition, see:

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    drop_partition --help

Rewrite File Index

Run the following command to submit a rewrite_file_index job for the table.

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    rewrite_file_index \
    --warehouse <warehouse-path> \
    --identifier <database.table> \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]
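
For example (the warehouse path and table identifier below are placeholders):

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    rewrite_file_index \
    --warehouse hdfs:///path/to/warehouse \
    --identifier default.orders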

For more information about rewrite_file_index, see:

    <FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    rewrite_file_index --help