SQL Query

Just like all other tables, Paimon tables can be queried with the SELECT statement.

Batch Query

Paimon’s batch read returns all the data in a snapshot of the table. By default, batch reads return the latest snapshot.

    -- Flink SQL
    SET 'execution.runtime-mode' = 'batch';

Batch Time Travel

Paimon batch reads with time travel can specify a snapshot or a tag and read the corresponding data.

Flink (dynamic option)

    -- read the snapshot with id 1L
    SELECT * FROM t /*+ OPTIONS('scan.snapshot-id' = '1') */;

    -- read the snapshot at the specified timestamp in unix milliseconds
    SELECT * FROM t /*+ OPTIONS('scan.timestamp-millis' = '1678883047356') */;

    -- read the snapshot at the specified timestamp string; it will be automatically converted to a timestamp in unix milliseconds
    -- supported formats include yyyy-MM-dd, yyyy-MM-dd HH:mm:ss and yyyy-MM-dd HH:mm:ss.SSS, using the default local time zone
    SELECT * FROM t /*+ OPTIONS('scan.timestamp' = '2023-12-09 23:09:12') */;

    -- read tag 'my-tag'
    SELECT * FROM t /*+ OPTIONS('scan.tag-name' = 'my-tag') */;

    -- read the snapshot from a watermark; this matches the first snapshot after the watermark
    SELECT * FROM t /*+ OPTIONS('scan.watermark' = '1678883047356') */;

Flink 1.18+

Flink SQL supports time travel syntax since version 1.18.

    -- read the snapshot from the specified timestamp
    SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-01-01 00:00:00';

    -- you can also use simple expressions (see the Flink documentation for supported functions)
    SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-01-01 00:00:00' + INTERVAL '1' DAY;

Batch Incremental

Read incremental changes between the start snapshot (exclusive) and the end snapshot (inclusive).

For example:

  • ‘5,10’ means changes between snapshot 5 and snapshot 10.
  • ‘TAG1,TAG3’ means changes between TAG1 and TAG3.

    -- incremental between snapshot ids
    SELECT * FROM t /*+ OPTIONS('incremental-between' = '12,20') */;

    -- incremental between snapshot timestamps in milliseconds
    SELECT * FROM t /*+ OPTIONS('incremental-between-timestamp' = '1692169000000,1692169900000') */;
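
Per the bullets above, tag names should work in the same option; a minimal sketch:

    -- incremental between tags
    SELECT * FROM t /*+ OPTIONS('incremental-between' = 'TAG1,TAG3') */;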

By default, Paimon scans changelog files for tables that produce changelog files; otherwise, it scans the newly changed data files. You can also force a particular behavior by specifying 'incremental-between-scan-mode'.
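
For example, to force scanning the newly changed files rather than the changelog files, a sketch (assuming 'delta' and 'changelog' are the values that select these two behaviors):

    -- force reading the newly changed files between the two snapshots
    SELECT * FROM t /*+ OPTIONS('incremental-between' = '12,20', 'incremental-between-scan-mode' = 'delta') */;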

In batch SQL, DELETE records are not allowed to be returned, so records of -D kind will be dropped. If you want to see DELETE records, you can query the audit_log table:

    SELECT * FROM t$audit_log /*+ OPTIONS('incremental-between' = '12,20') */;

Streaming Query

By default, streaming read produces the latest snapshot on the table upon first startup, and continues to read the latest changes.

By default, Paimon ensures that your startup is properly processed with all data included.

Paimon Source in Streaming mode is unbounded, like a queue that never ends.

    -- Flink SQL
    SET 'execution.runtime-mode' = 'streaming';

If you want to do streaming read without reading the snapshot data first, you can use the latest scan mode:

    -- continuously reads the latest changes without producing a snapshot at the beginning
    SELECT * FROM t /*+ OPTIONS('scan.mode' = 'latest') */;

Streaming Time Travel

If you only want to process data for today and beyond, you can do so with partition filters:

    SELECT * FROM t WHERE dt > '2023-06-26';

If it’s not a partitioned table, or you can’t filter by partition, you can use time travel’s stream read.

Flink (dynamic option)

    -- read changes from snapshot id 1L
    SELECT * FROM t /*+ OPTIONS('scan.snapshot-id' = '1') */;

    -- read changes from the snapshot at the specified timestamp
    SELECT * FROM t /*+ OPTIONS('scan.timestamp-millis' = '1678883047356') */;

    -- read snapshot id 1L upon first startup, then continue to read the changes
    SELECT * FROM t /*+ OPTIONS('scan.mode' = 'from-snapshot-full', 'scan.snapshot-id' = '1') */;

Flink 1.18+

Flink SQL supports time travel syntax since version 1.18.

    -- read the snapshot from the specified timestamp
    SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-01-01 00:00:00';

    -- you can also use simple expressions (see the Flink documentation for supported functions)
    SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-01-01 00:00:00' + INTERVAL '1' DAY;

Time travel’s stream read relies on snapshots, but by default snapshots are only retained for 1 hour, which can prevent you from reading older incremental data. So, Paimon also provides another mode for streaming reads, scan.file-creation-time-millis, which provides a rough filter that retains only the files created after timeMillis.

    SELECT * FROM t /*+ OPTIONS('scan.file-creation-time-millis' = '1678883047356') */;

Consumer ID

You can specify the consumer-id when stream reading a table:

    SELECT * FROM t /*+ OPTIONS('consumer-id' = 'myid', 'consumer.expiration-time' = '1 d', 'consumer.mode' = 'at-least-once') */;

When stream reading Paimon tables with a consumer id specified, the next snapshot id to be consumed is recorded into the file system. This has several advantages:

  1. When the previous job is stopped, a newly started job can continue to consume from the previous progress without resuming from the Flink state. The new job will start reading from the next snapshot id found in the consumer files. If you don’t want this behavior, you can set 'consumer.ignore-progress' to true, as shown in the sketch after this list.
  2. When deciding whether a snapshot has expired, Paimon looks at all the consumers of the table in the file system, and if there are consumers that still depend on this snapshot, then this snapshot will not be deleted by expiration.
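
For example, a query that ignores the recorded consumer progress might look like this (a sketch; 'myid' is just an illustrative consumer id):

    -- start reading according to the scan mode instead of the recorded consumer progress
    SELECT * FROM t /*+ OPTIONS('consumer-id' = 'myid', 'consumer.ignore-progress' = 'true') */;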

NOTE 1: The consumer will prevent expiration of the snapshot. You can specify 'consumer.expiration-time' to manage the lifetime of consumers.

NOTE 2: If you don’t want to affect the checkpoint time, you need to configure 'consumer.mode' = 'at-least-once'. This mode allows readers to consume snapshots at different rates and records the slowest snapshot-id among all readers into the consumer. This mode can provide more capabilities, such as watermark alignment.

NOTE 3: Regarding 'consumer.mode': since the implementations of exactly-once mode and at-least-once mode are completely different, the Flink state is incompatible between them and cannot be restored when switching modes.

You can reset a consumer to a given next snapshot ID, or delete a consumer, by its consumer ID. First, you need to stop the streaming task that uses this consumer ID, and then execute the reset consumer action job.

Run the following command:

    <FLINK_HOME>/bin/flink run \
        /path/to/paimon-flink-action-0.9.0.jar \
        reset-consumer \
        --warehouse <warehouse-path> \
        --database <database-name> \
        --table <table-name> \
        --consumer_id <consumer-id> \
        [--next_snapshot <next-snapshot-id>] \
        [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]

Please don’t specify the --next_snapshot parameter if you want to delete the consumer.
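
For instance, deleting the consumer 'myid' might look like this (a sketch; the warehouse path and names are illustrative, and --next_snapshot is omitted so the consumer is deleted):

    <FLINK_HOME>/bin/flink run \
        /path/to/paimon-flink-action-0.9.0.jar \
        reset-consumer \
        --warehouse hdfs:///path/to/warehouse \
        --database default \
        --table t \
        --consumer_id myid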

Read Overwrite

Streaming reading will ignore the commits generated by INSERT OVERWRITE by default. If you want to read the commits of OVERWRITE, you can configure streaming-read-overwrite.
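
A minimal sketch of such a query (assuming the option is a boolean table option that can be set through a hint):

    -- also read the changes produced by INSERT OVERWRITE commits
    SELECT * FROM t /*+ OPTIONS('streaming-read-overwrite' = 'true') */;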

Read Parallelism

By default, the parallelism of batch reads is the same as the number of splits, while the parallelism of stream reads is the same as the number of buckets, but not greater than scan.infer-parallelism.max.

If you disable scan.infer-parallelism, the global parallelism will be used for reads.

You can also manually specify the parallelism via scan.parallelism.

| Key | Default | Type | Description |
|-----|---------|------|-------------|
| scan.infer-parallelism | true | Boolean | If false, the parallelism of the source is set by the global parallelism. Otherwise, the source parallelism is inferred from the number of splits (batch mode) or the number of buckets (streaming mode). |
| scan.infer-parallelism.max | 1024 | Integer | If scan.infer-parallelism is true, limit the parallelism of the source through this option. |
| scan.parallelism | (none) | Integer | Define a custom parallelism for the scan source. By default, if this option is not defined, the planner derives the parallelism for each statement individually by also considering the global configuration. If scan.infer-parallelism is enabled, the planner derives the parallelism from the inferred parallelism. |
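
For example, the following query disables inference and fixes the source parallelism (the value 4 is arbitrary):

    -- turn off parallelism inference and read with a fixed source parallelism of 4
    SELECT * FROM t /*+ OPTIONS('scan.infer-parallelism' = 'false', 'scan.parallelism' = '4') */;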

Query Optimization

Batch Streaming

It is highly recommended to specify partition and primary key filters along with the query, which will speed up the data skipping of the query.

The filter functions that can accelerate data skipping are:

  • =
  • <
  • <=
  • >
  • >=
  • IN (...)
  • LIKE 'abc%'
  • IS NULL
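
For illustration, filters of these shapes might look as follows (a sketch; dt and name are hypothetical partition and primary key columns):

    -- equality, range and set membership filters
    SELECT * FROM t WHERE dt = '2023-06-26';
    SELECT * FROM t WHERE dt IN ('2023-06-25', '2023-06-26');

    -- prefix matching and null checks
    SELECT * FROM t WHERE name LIKE 'abc%';
    SELECT * FROM t WHERE name IS NULL;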

Paimon will sort the data by primary key, which speeds up the point queries and range queries. When using a composite primary key, it is best for the query filters to form a leftmost prefix of the primary key for good acceleration.

Suppose that a table has the following specification:

    CREATE TABLE orders (
        catalog_id BIGINT,
        order_id BIGINT,
        .....,
        PRIMARY KEY (catalog_id, order_id) NOT ENFORCED -- composite primary key
    );

A query obtains good acceleration by specifying an equality or range filter on the leftmost prefix of the primary key.

    SELECT * FROM orders WHERE catalog_id=1025;

    SELECT * FROM orders WHERE catalog_id=1025 AND order_id=29495;

    SELECT * FROM orders
    WHERE catalog_id=1025
      AND order_id>2035 AND order_id<6000;

However, the following filters cannot accelerate the query well.

    SELECT * FROM orders WHERE order_id=29495;

    SELECT * FROM orders WHERE catalog_id=1025 OR order_id=29495;