Aggregation

NOTE: Always set table.exec.sink.upsert-materialize to NONE in the Flink SQL TableConfig; sink upsert materialization may otherwise produce unexpected results. When inputs are out of order, we recommend using a sequence field to correct the disorder.

Sometimes users only care about aggregated results. The aggregation merge engine aggregates each value field with the latest data, one by one, under the same primary key according to its aggregate function.

Each field not part of the primary keys can be given an aggregate function, specified by the fields.<field-name>.aggregate-function table property; otherwise last_non_null_value aggregation is used as the default. For example, consider the following table definition.

Flink

  CREATE TABLE my_table (
      product_id BIGINT,
      price DOUBLE,
      sales BIGINT,
      PRIMARY KEY (product_id) NOT ENFORCED
  ) WITH (
      'merge-engine' = 'aggregation',
      'fields.price.aggregate-function' = 'max',
      'fields.sales.aggregate-function' = 'sum'
  );

Field price will be aggregated by the max function, and field sales will be aggregated by the sum function. Given two input records <1, 23.0, 15> and <1, 30.2, 20>, the final result will be <1, 30.2, 35>.
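The merge behavior above can be sketched in Python (an illustrative model of the merge semantics, not Paimon code; the merge helper and field names are hypothetical):

```python
# Illustrative model of the aggregation merge engine (not Paimon code).
# Each non-key field is merged with the latest row by its aggregate function.
def merge(old, new, agg_funcs):
    """old/new are dicts of field -> value; agg_funcs maps field -> function."""
    return {f: agg_funcs[f](old[f], new[f]) for f in old}

agg_funcs = {"price": max, "sales": lambda a, b: a + b}  # max / sum

row = {"price": 23.0, "sales": 15}                        # first record, key = 1
row = merge(row, {"price": 30.2, "sales": 20}, agg_funcs)  # second record
print(row)  # {'price': 30.2, 'sales': 35}
```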

Aggregation Functions

Currently supported aggregate functions and data types are:

sum

The sum function aggregates the values across multiple rows. It supports DECIMAL, TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, and DOUBLE data types.

product

The product function computes the product of values across multiple rows. It supports DECIMAL, TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, and DOUBLE data types.

count

In scenarios where you need to count rows that match a specific condition, you can use the sum function to achieve this. Express the condition as a Boolean value (TRUE or FALSE) and convert it into a number: TRUE becomes 1 and FALSE becomes 0, so summing the converted values effectively counts the matching rows.

For example, if you have a table orders and want to count the number of rows that meet a specific condition, you can use the following query:

  SELECT SUM(CASE WHEN condition THEN 1 ELSE 0 END) AS count
  FROM orders;

max

The max function identifies and retains the maximum value. It supports CHAR, VARCHAR, DECIMAL, TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and TIMESTAMP_LTZ data types.

min

The min function identifies and retains the minimum value. It supports CHAR, VARCHAR, DECIMAL, TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and TIMESTAMP_LTZ data types.

last_value

The last_value function replaces the previous value with the most recently imported value. It supports all data types.

last_non_null_value

The last_non_null_value function replaces the previous value with the latest non-null value. It supports all data types.
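The difference between the two "last" functions can be sketched in Python (an illustrative model, not Paimon code):

```python
# Illustrative semantics of last_value vs last_non_null_value (not Paimon code).
def last_value(old, new):
    return new  # always take the most recent value, even if it is null

def last_non_null_value(old, new):
    return new if new is not None else old  # keep the old value when new is null

values = [1, None, 3, None]  # arrival order of one field under the same key
lv = lnn = None
for v in values:
    lv = last_value(lv, v)
    lnn = last_non_null_value(lnn, v)
print(lv, lnn)  # None 3
```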

listagg

The listagg function concatenates multiple string values into a single string. It supports STRING data type. Each field not part of the primary keys can be given a listagg delimiter, specified by the fields.<field-name>.list-agg-delimiter table property; otherwise "," is used as the default.
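The concatenation behavior can be sketched as follows (an illustrative Python model, not Paimon code):

```python
# Illustrative semantics of listagg (not Paimon code): concatenate incoming
# string values with the configured delimiter (default ",").
def listagg(old, new, delimiter=","):
    if old is None:
        return new
    if new is None:
        return old
    return old + delimiter + new

acc = None
for v in ["a", "b", "c"]:
    acc = listagg(acc, v)
print(acc)  # a,b,c
```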

bool_and

The bool_and function evaluates whether all values in a boolean set are true. It supports BOOLEAN data type.

bool_or

The bool_or function checks if at least one value in a boolean set is true. It supports BOOLEAN data type.

first_value

The first_value function retrieves the first value from a data set. It supports all data types.

first_non_null_value

The first_non_null_value function selects the first non-null value in a data set. It supports all data types.

rbm32

The rbm32 function aggregates multiple serialized 32-bit RoaringBitmap values into a single RoaringBitmap. It supports VARBINARY data type.

rbm64

The rbm64 function aggregates multiple serialized 64-bit Roaring64Bitmap values into a single Roaring64Bitmap. It supports VARBINARY data type.

nested_update

The nested_update function collects multiple rows into one array (so-called ‘nested table’). It supports ARRAY data types.

Use fields.<field-name>.nested-key=pk0,pk1,... to specify the primary keys of the nested table. If no keys are specified, rows are appended to the array.
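The keyed upsert behavior can be sketched in Python (an illustrative model, not Paimon code; the nested_update helper is hypothetical):

```python
# Illustrative model of nested_update (not Paimon code): rows are upserted
# into the array by the configured nested key; without a key they are appended.
def nested_update(array, new_rows, nested_key=None):
    array = list(array)
    for row in new_rows:
        if nested_key is None:
            array.append(row)          # no key: plain append
            continue
        key = tuple(row[k] for k in nested_key)
        for i, existing in enumerate(array):
            if tuple(existing[k] for k in nested_key) == key:
                array[i] = row         # same key: replace the old row
                break
        else:
            array.append(row)          # new key: append
    return array

arr = nested_update([], [{"sub_order_id": 1, "price": 10}], ["sub_order_id"])
arr = nested_update(arr, [{"sub_order_id": 1, "price": 99},
                          {"sub_order_id": 2, "price": 20}], ["sub_order_id"])
print(arr)  # [{'sub_order_id': 1, 'price': 99}, {'sub_order_id': 2, 'price': 20}]
```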

An example:

Flink

  -- orders table
  CREATE TABLE orders (
      order_id BIGINT PRIMARY KEY NOT ENFORCED,
      user_name STRING,
      address STRING
  );

  -- sub orders that have the same order_id
  -- belong to the same order
  CREATE TABLE sub_orders (
      order_id BIGINT,
      sub_order_id INT,
      product_name STRING,
      price BIGINT,
      PRIMARY KEY (order_id, sub_order_id) NOT ENFORCED
  );

  -- wide table
  CREATE TABLE order_wide (
      order_id BIGINT PRIMARY KEY NOT ENFORCED,
      user_name STRING,
      address STRING,
      sub_orders ARRAY<ROW<sub_order_id BIGINT, product_name STRING, price BIGINT>>
  ) WITH (
      'merge-engine' = 'aggregation',
      'fields.sub_orders.aggregate-function' = 'nested_update',
      'fields.sub_orders.nested-key' = 'sub_order_id'
  );

  -- widen
  INSERT INTO order_wide
  SELECT
      order_id,
      user_name,
      address,
      CAST (NULL AS ARRAY<ROW<sub_order_id BIGINT, product_name STRING, price BIGINT>>)
  FROM orders
  UNION ALL
  SELECT
      order_id,
      CAST (NULL AS STRING),
      CAST (NULL AS STRING),
      ARRAY[ROW(sub_order_id, product_name, price)]
  FROM sub_orders;

  -- query using UNNEST
  SELECT order_id, user_name, address, sub_order_id, product_name, price
  FROM order_wide, UNNEST(sub_orders) AS so(sub_order_id, product_name, price)

collect

The collect function collects elements into an Array. You can set fields.<field-name>.distinct=true to deduplicate elements. It only supports ARRAY type.
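The collect semantics, with and without deduplication, can be sketched as follows (an illustrative Python model, not Paimon code):

```python
# Illustrative semantics of collect (not Paimon code): gather elements into
# an array, optionally deduplicating when distinct is enabled.
def collect(old, new, distinct=False):
    merged = list(old) + list(new)
    if distinct:
        deduped = []
        for e in merged:
            if e not in deduped:   # keep first occurrence, preserve order
                deduped.append(e)
        return deduped
    return merged

print(collect(["A", "B"], ["B", "C"]))                 # ['A', 'B', 'B', 'C']
print(collect(["A", "B"], ["B", "C"], distinct=True))  # ['A', 'B', 'C']
```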

merge_map

The merge_map function merges input maps. It only supports MAP type.
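The merge behavior can be sketched as follows (an illustrative Python model, not Paimon code): later entries overwrite earlier ones under the same map key.

```python
# Illustrative semantics of merge_map (not Paimon code): the new map's
# entries overwrite the old map's entries under the same key.
def merge_map(old, new):
    merged = dict(old)
    merged.update(new)
    return merged

print(merge_map({1: "A", 2: "B"}, {2: "X", 3: "C"}))  # {1: 'A', 2: 'X', 3: 'C'}
```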

Types of cardinality sketches

Paimon uses the Apache DataSketches library of stochastic streaming algorithms to implement sketch modules. The DataSketches library includes various types of sketches, each one designed to solve a different sort of problem. Paimon supports HyperLogLog (HLL) and Theta cardinality sketches.

HyperLogLog

The HyperLogLog (HLL) sketch aggregator is a very compact sketch algorithm for approximate distinct counting. You can also use the HLL aggregator to calculate a union of HLL sketches.

Theta

The Theta sketch is a sketch algorithm for approximate distinct counting with set operations. Theta sketches let you count the overlap between sets, so that you can compute the union, intersection, or set difference between sketch objects.

Choosing a sketch type

HLL and Theta sketches both support approximate distinct counting; however, the HLL sketch produces more accurate results and consumes less storage space. Theta sketches are more flexible but require significantly more memory.

When choosing an approximation algorithm for your use case, consider the following:

If your use case entails distinct counting and merging sketch objects, use the HLL sketch. If you need to evaluate union, intersection, or difference set operations, use the Theta sketch. You cannot merge HLL sketches with Theta sketches.

hll_sketch

The hll_sketch function aggregates multiple serialized Sketch objects into a single Sketch. It supports VARBINARY data type.

An example:

Flink

  -- source table
  CREATE TABLE VISITS (
      id INT PRIMARY KEY NOT ENFORCED,
      user_id STRING
  );

  -- agg table
  CREATE TABLE UV_AGG (
      id INT PRIMARY KEY NOT ENFORCED,
      uv VARBINARY
  ) WITH (
      'merge-engine' = 'aggregation',
      'fields.uv.aggregate-function' = 'hll_sketch'
  );

  -- Register the following class as a Flink function with the name "HLL_SKETCH"
  -- which is used to transform input to sketch bytes array:
  --
  -- public static class HllSketchFunction extends ScalarFunction {
  --     public byte[] eval(String user_id) {
  --         HllSketch hllSketch = new HllSketch();
  --         hllSketch.update(user_id);
  --         return hllSketch.toCompactByteArray();
  --     }
  -- }
  --
  INSERT INTO UV_AGG SELECT id, HLL_SKETCH(user_id) FROM VISITS;

  -- Register the following class as a Flink function with the name "HLL_SKETCH_COUNT"
  -- which is used to get cardinality from sketch bytes array:
  --
  -- public static class HllSketchCountFunction extends ScalarFunction {
  --     public Double eval(byte[] sketchBytes) {
  --         if (sketchBytes == null) {
  --             return 0d;
  --         }
  --         return HllSketch.heapify(sketchBytes).getEstimate();
  --     }
  -- }
  --
  -- Then we can get user cardinality based on the aggregated field.
  SELECT id, HLL_SKETCH_COUNT(uv) AS uv FROM UV_AGG;

theta_sketch

The theta_sketch function aggregates multiple serialized Sketch objects into a single Sketch. It supports VARBINARY data type.

An example:

Flink

  -- source table
  CREATE TABLE VISITS (
      id INT PRIMARY KEY NOT ENFORCED,
      user_id STRING
  );

  -- agg table
  CREATE TABLE UV_AGG (
      id INT PRIMARY KEY NOT ENFORCED,
      uv VARBINARY
  ) WITH (
      'merge-engine' = 'aggregation',
      'fields.uv.aggregate-function' = 'theta_sketch'
  );

  -- Register the following class as a Flink function with the name "THETA_SKETCH"
  -- which is used to transform input to sketch bytes array:
  --
  -- public static class ThetaSketchFunction extends ScalarFunction {
  --     public byte[] eval(String user_id) {
  --         UpdateSketch updateSketch = UpdateSketch.builder().build();
  --         updateSketch.update(user_id);
  --         return updateSketch.compact().toByteArray();
  --     }
  -- }
  --
  INSERT INTO UV_AGG SELECT id, THETA_SKETCH(user_id) FROM VISITS;

  -- Register the following class as a Flink function with the name "THETA_SKETCH_COUNT"
  -- which is used to get cardinality from sketch bytes array:
  --
  -- public static class ThetaSketchCountFunction extends ScalarFunction {
  --     public Double eval(byte[] sketchBytes) {
  --         if (sketchBytes == null) {
  --             return 0d;
  --         }
  --         return Sketches.wrapCompactSketch(Memory.wrap(sketchBytes)).getEstimate();
  --     }
  -- }
  --
  -- Then we can get user cardinality based on the aggregated field.
  SELECT id, THETA_SKETCH_COUNT(uv) AS uv FROM UV_AGG;

For streaming queries, the aggregation merge engine must be used together with the lookup or full-compaction changelog producer. (The 'input' changelog producer is also supported, but it only returns input records.)

Retraction

Only sum, product, collect, merge_map, nested_update, last_value, and last_non_null_value support retraction (UPDATE_BEFORE and DELETE); the other aggregate functions do not. If you want some functions to ignore retraction messages, you can configure 'fields.${field_name}.ignore-retract'='true'.

The last_value and last_non_null_value functions simply set the field to null when they accept retract messages.
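That reset-to-null behavior can be sketched as follows (an illustrative Python model, not Paimon code):

```python
# Illustrative retraction handling for last_value (not Paimon code):
# a retract message simply resets the field to null.
def apply(value, kind, payload):
    if kind in ("+I", "+U"):
        return payload            # insert / update-after: take the new value
    if kind in ("-U", "-D"):
        return None               # retract: set the field to null
    raise ValueError(kind)

v = apply(None, "+I", 5)          # insert 5
v = apply(v, "-U", 5)             # retract it
print(v)  # None
```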

The collect and merge_map make a best-effort attempt to handle retraction messages, but the results are not guaranteed to be accurate. The following behaviors may occur when processing retraction messages:

  1. It might fail to handle retraction messages if records arrive out of order. For example, the table uses collect, and the upstreams send +I['A', 'B'] and -U['A'] respectively. If the table receives -U['A'] first, it can do nothing; when it then receives +I['A', 'B'], the merge result will be +I['A', 'B'] instead of +I['B'].

  2. A retract message from one upstream will retract the result merged from multiple upstreams. For example, the table uses merge_map; one upstream sends +I[1->A], and another upstream sends +I[1->B] and later -D[1->B]. The table first merges the two insert values into +I[1->B], and then -D[1->B] retracts the whole result, so the final result is an empty map instead of +I[1->A].
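The second behavior can be sketched in Python (an illustrative model of the best-effort retraction, not Paimon code):

```python
# Illustrative sketch of behavior 2 (not Paimon code): with merge_map, a
# retract from one upstream drops the whole merged result, so entries merged
# from other upstreams are lost too.
def merge_map_with_retract(state, kind, payload):
    if kind == "+I":
        merged = dict(state)
        merged.update(payload)
        return merged
    if kind == "-D":
        return {}                 # best effort: retract the whole merged map
    raise ValueError(kind)

state = {}
state = merge_map_with_retract(state, "+I", {1: "A"})  # from upstream 1
state = merge_map_with_retract(state, "+I", {1: "B"})  # from upstream 2
state = merge_map_with_retract(state, "-D", {1: "B"})  # retract from upstream 2
print(state)  # {} -- an empty map, not {1: 'A'}
```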