AUTO_INCREMENT Column

Since Version 2.1

When importing data, Doris assigns a table-unique value to rows that do not have specified values in the auto-increment column.

Functionality

For tables containing an auto-increment column, during data import:

  • If the target columns don’t include the auto-increment column, Doris will populate the auto-increment column with generated values.
  • If the target columns include the auto-increment column, null values in the imported data for that column will be replaced by values generated by Doris, while non-null values will remain unchanged. Note that non-null values can disrupt the uniqueness of the auto-increment column values.

Uniqueness

Doris guarantees that the values it generates for an auto-increment column are unique within the table. However, this guarantee covers only the values Doris fills in automatically; it does not take user-provided values into account. If a user explicitly writes values into the auto-increment column, table-wide uniqueness cannot be guaranteed.
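
For example, the following sequence is a hypothetical sketch (assuming the `demo`.`tbl` table from Example 1 below, which has columns id and value): the explicitly provided value is not tracked by Doris, so a later generated value may coincide with it and break uniqueness.

    -- Explicitly provided value: Doris does not record it.
    insert into tbl(id, value) values(100, 1);
    -- Doris-generated values: at some point a generated id may also be 100,
    -- producing a duplicate in the auto-increment column.
    insert into tbl(value) values(2), (3);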

Density

Doris ensures that the values generated for an auto-increment column are dense, but it cannot guarantee that the values generated within a single import are contiguous, so an import may show some jumps in the generated values. This is because, for performance reasons, each BE caches a batch of pre-allocated auto-increment values, and the cached ranges of different BEs do not overlap. A further consequence of this caching is that the values generated by a later import are not necessarily larger than those generated by an earlier import, so auto-increment values cannot be used to determine the chronological order of imports.
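
One consequence is that max(id) can exceed the row count even when every value was generated by Doris. A quick way to check for gaps (a sketch, assuming the `demo`.`tbl` table from Example 1 below, the default starting value of 1, and that no rows have been deleted):

    -- If max_id is larger than row_cnt, the generated values contain gaps.
    select count(*) as row_cnt, max(id) as max_id from tbl;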

Syntax

To use an auto-increment column, add the AUTO_INCREMENT attribute to the corresponding column when creating the table (CREATE TABLE). To manually specify the starting value, use AUTO_INCREMENT(start_value) in the column definition; if no starting value is specified, it defaults to 1.

Examples

  1. Creating a Duplicate table with one key column as an auto-increment column:

     CREATE TABLE `demo`.`tbl` (
         `id` BIGINT NOT NULL AUTO_INCREMENT,
         `value` BIGINT NOT NULL
     ) ENGINE=OLAP
     DUPLICATE KEY(`id`)
     DISTRIBUTED BY HASH(`id`) BUCKETS 10
     PROPERTIES (
         "replication_allocation" = "tag.location.default: 3"
     );
  2. Creating a Duplicate table with one key column as an auto-increment column, with the starting value set to 100:

     CREATE TABLE `demo`.`tbl` (
         `id` BIGINT NOT NULL AUTO_INCREMENT(100),
         `value` BIGINT NOT NULL
     ) ENGINE=OLAP
     DUPLICATE KEY(`id`)
     DISTRIBUTED BY HASH(`id`) BUCKETS 10
     PROPERTIES (
         "replication_allocation" = "tag.location.default: 3"
     );
  3. Creating a Duplicate table with one value column as an auto-increment column:

     CREATE TABLE `demo`.`tbl` (
         `uid` BIGINT NOT NULL,
         `name` BIGINT NOT NULL,
         `id` BIGINT NOT NULL AUTO_INCREMENT,
         `value` BIGINT NOT NULL
     ) ENGINE=OLAP
     DUPLICATE KEY(`uid`, `name`)
     DISTRIBUTED BY HASH(`uid`) BUCKETS 10
     PROPERTIES (
         "replication_allocation" = "tag.location.default: 3"
     );
  4. Creating a Unique table with one key column as an auto-increment column:

     CREATE TABLE `demo`.`tbl` (
         `id` BIGINT NOT NULL AUTO_INCREMENT,
         `name` varchar(65533) NOT NULL,
         `value` int(11) NOT NULL
     ) ENGINE=OLAP
     UNIQUE KEY(`id`)
     DISTRIBUTED BY HASH(`id`) BUCKETS 10
     PROPERTIES (
         "replication_allocation" = "tag.location.default: 3",
         "enable_unique_key_merge_on_write" = "true"
     );
  5. Creating a Unique table with one value column as an auto-increment column:

     CREATE TABLE `demo`.`tbl` (
         `text` varchar(65533) NOT NULL,
         `id` BIGINT NOT NULL AUTO_INCREMENT
     ) ENGINE=OLAP
     UNIQUE KEY(`text`)
     DISTRIBUTED BY HASH(`text`) BUCKETS 10
     PROPERTIES (
         "replication_allocation" = "tag.location.default: 3",
         "enable_unique_key_merge_on_write" = "true"
     );

Constraints and Limitations

  • Only Duplicate model tables and Unique model tables can contain auto-increment columns.
  • A table can contain at most one auto-increment column.
  • The type of the auto-increment column must be BIGINT and must be NOT NULL.
  • The manually specified starting value for an auto-increment column must be greater than or equal to 0.
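
For illustration, the following definition violates the type constraint and would be rejected (a hypothetical counter-example; the auto-increment column must be a NOT NULL BIGINT):

    CREATE TABLE `demo`.`invalid_tbl` (
        `id` INT NOT NULL AUTO_INCREMENT,    -- invalid: auto-increment columns must be BIGINT
        `value` BIGINT NOT NULL
    ) ENGINE=OLAP
    DUPLICATE KEY(`id`)
    DISTRIBUTED BY HASH(`id`) BUCKETS 10
    PROPERTIES (
        "replication_allocation" = "tag.location.default: 3"
    );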

Usage

Import

Consider the following table:

    CREATE TABLE `demo`.`tbl` (
        `id` BIGINT NOT NULL AUTO_INCREMENT,
        `name` varchar(65533) NOT NULL,
        `value` int(11) NOT NULL
    ) ENGINE=OLAP
    UNIQUE KEY(`id`)
    DISTRIBUTED BY HASH(`id`) BUCKETS 10
    PROPERTIES (
        "replication_allocation" = "tag.location.default: 3",
        "enable_unique_key_merge_on_write" = "true"
    );

When using the insert into statement to import data without specifying the auto-increment column id, the id column will automatically be filled with generated values.

    mysql> insert into tbl(name, value) values("Bob", 10), ("Alice", 20), ("Jack", 30);
    Query OK, 3 rows affected (0.09 sec)
    {'label':'label_183babcb84ad4023_a2d6266ab73fb5aa', 'status':'VISIBLE', 'txnId':'7'}

    mysql> select * from tbl order by id;
    +------+-------+-------+
    | id   | name  | value |
    +------+-------+-------+
    |    1 | Bob   |    10 |
    |    2 | Alice |    20 |
    |    3 | Jack  |    30 |
    +------+-------+-------+
    3 rows in set (0.05 sec)

Similarly, using stream load to import the file test.csv without specifying the auto-increment column id will result in the id column being automatically filled with generated values.

test.csv:

    Tom, 40
    John, 50

    curl --location-trusted -u user:passwd -H "columns:name,value" -H "column_separator:," -T ./test.csv http://{host}:{port}/api/{db}/tbl/_stream_load
    mysql> select * from tbl order by id;
    +------+-------+-------+
    | id   | name  | value |
    +------+-------+-------+
    |    1 | Bob   |    10 |
    |    2 | Alice |    20 |
    |    3 | Jack  |    30 |
    |    4 | Tom   |    40 |
    |    5 | John  |    50 |
    +------+-------+-------+
    5 rows in set (0.04 sec)
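
If the stream load does specify the auto-increment column id, rows whose id field is null (written as \N in CSV by default) are likewise filled with generated values, while non-null values are kept. A sketch, assuming a hypothetical file test2.csv whose first field maps to id:

    \N,Sam,60
    \N,Lily,70

    curl --location-trusted -u user:passwd -H "columns:id,name,value" -H "column_separator:," -T ./test2.csv http://{host}:{port}/api/{db}/tbl/_stream_load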

When using the insert into statement to import data while specifying the auto-increment column id, null values in the imported data for that column will be replaced by generated values.

    mysql> insert into tbl(id, name, value) values(null, "Doris", 60), (null, "Nereids", 70);
    Query OK, 2 rows affected (0.07 sec)
    {'label':'label_9cb0c01db1a0402c_a2b8b44c11ce4703', 'status':'VISIBLE', 'txnId':'10'}

    mysql> select * from tbl order by id;
    +------+---------+-------+
    | id   | name    | value |
    +------+---------+-------+
    |    1 | Bob     |    10 |
    |    2 | Alice   |    20 |
    |    3 | Jack    |    30 |
    |    4 | Tom     |    40 |
    |    5 | John    |    50 |
    |    6 | Doris   |    60 |
    |    7 | Nereids |    70 |
    +------+---------+-------+
    7 rows in set (0.04 sec)

Partial Update

When performing a partial update on a merge-on-write Unique table containing an auto-increment column:

If the auto-increment column is a key column, it must always be specified explicitly during a partial update (users have to provide all key columns), so the target columns of a partial update necessarily include the auto-increment column. In this scenario, the import behaves like a regular partial update.

    mysql> CREATE TABLE `demo`.`tbl2` (
        ->     `id` BIGINT NOT NULL AUTO_INCREMENT,
        ->     `name` varchar(65533) NOT NULL,
        ->     `value` int(11) NOT NULL DEFAULT "0"
        -> ) ENGINE=OLAP
        -> UNIQUE KEY(`id`)
        -> DISTRIBUTED BY HASH(`id`) BUCKETS 10
        -> PROPERTIES (
        ->     "replication_allocation" = "tag.location.default: 3",
        ->     "enable_unique_key_merge_on_write" = "true"
        -> );
    Query OK, 0 rows affected (0.03 sec)

    mysql> insert into tbl2(id, name, value) values(1, "Bob", 10), (2, "Alice", 20), (3, "Jack", 30);
    Query OK, 3 rows affected (0.14 sec)
    {'label':'label_5538549c866240b6_bce75ef323ac22a0', 'status':'VISIBLE', 'txnId':'1004'}

    mysql> select * from tbl2 order by id;
    +------+-------+-------+
    | id   | name  | value |
    +------+-------+-------+
    |    1 | Bob   |    10 |
    |    2 | Alice |    20 |
    |    3 | Jack  |    30 |
    +------+-------+-------+
    3 rows in set (0.08 sec)

    mysql> set enable_unique_key_partial_update=true;
    Query OK, 0 rows affected (0.01 sec)

    mysql> set enable_insert_strict=false;
    Query OK, 0 rows affected (0.00 sec)

    mysql> insert into tbl2(id, name) values(1, "modified"), (4, "added");
    Query OK, 2 rows affected (0.06 sec)
    {'label':'label_3e68324cfd87457d_a6166cc0a878cfdc', 'status':'VISIBLE', 'txnId':'1005'}

    mysql> select * from tbl2 order by id;
    +------+----------+-------+
    | id   | name     | value |
    +------+----------+-------+
    |    1 | modified |    10 |
    |    2 | Alice    |    20 |
    |    3 | Jack     |    30 |
    |    4 | added    |     0 |
    +------+----------+-------+
    4 rows in set (0.04 sec)

When the auto-increment column is a non-key column and users do not specify a value for it, the value is filled in from the existing rows in the table. If users do specify the auto-increment column, null values in the imported data for that column are replaced with generated values while non-null values remain unchanged, and the data is then loaded with partial-update semantics.

    mysql> CREATE TABLE `demo`.`tbl3` (
        ->     `id` BIGINT NOT NULL,
        ->     `name` varchar(100) NOT NULL,
        ->     `score` BIGINT NOT NULL,
        ->     `aid` BIGINT NOT NULL AUTO_INCREMENT
        -> ) ENGINE=OLAP
        -> UNIQUE KEY(`id`)
        -> DISTRIBUTED BY HASH(`id`) BUCKETS 1
        -> PROPERTIES (
        ->     "replication_allocation" = "tag.location.default: 3",
        ->     "enable_unique_key_merge_on_write" = "true"
        -> );
    Query OK, 0 rows affected (0.16 sec)

    mysql> insert into tbl3(id, name, score) values(1, "Doris", 100), (2, "Nereids", 200), (3, "Bob", 300);
    Query OK, 3 rows affected (0.28 sec)
    {'label':'label_c52b2c246e244dda_9b91ee5e27a31f9b', 'status':'VISIBLE', 'txnId':'2003'}

    mysql> select * from tbl3 order by id;
    +------+---------+-------+------+
    | id   | name    | score | aid  |
    +------+---------+-------+------+
    |    1 | Doris   |   100 |    0 |
    |    2 | Nereids |   200 |    1 |
    |    3 | Bob     |   300 |    2 |
    +------+---------+-------+------+
    3 rows in set (0.13 sec)

    mysql> set enable_unique_key_partial_update=true;
    Query OK, 0 rows affected (0.00 sec)

    mysql> set enable_insert_strict=false;
    Query OK, 0 rows affected (0.00 sec)

    mysql> insert into tbl3(id, score) values(1, 999), (2, 888);
    Query OK, 2 rows affected (0.07 sec)
    {'label':'label_dfec927d7a4343ca_9f9ade581391de97', 'status':'VISIBLE', 'txnId':'2004'}

    mysql> select * from tbl3 order by id;
    +------+---------+-------+------+
    | id   | name    | score | aid  |
    +------+---------+-------+------+
    |    1 | Doris   |   999 |    0 |
    |    2 | Nereids |   888 |    1 |
    |    3 | Bob     |   300 |    2 |
    +------+---------+-------+------+
    3 rows in set (0.06 sec)

    mysql> insert into tbl3(id, aid) values(1, 1000), (3, 500);
    Query OK, 2 rows affected (0.07 sec)
    {'label':'label_b26012959f714f60_abe23c87a06aa0bf', 'status':'VISIBLE', 'txnId':'2005'}

    mysql> select * from tbl3 order by id;
    +------+---------+-------+------+
    | id   | name    | score | aid  |
    +------+---------+-------+------+
    |    1 | Doris   |   999 | 1000 |
    |    2 | Nereids |   888 |    1 |
    |    3 | Bob     |   300 |  500 |
    +------+---------+-------+------+
    3 rows in set (0.06 sec)

Usage Scenarios

Dictionary Encoding

Using bitmaps for audience analysis in user profiling requires building a user dictionary in which each user corresponds to a unique integer dictionary value. Aggregating these dictionary values improves the performance of bitmap operations.

Taking the offline UV and PV analysis scenario as an example, assuming there’s a detailed user behavior table:

    CREATE TABLE `demo`.`dwd_dup_tbl` (
        `user_id` varchar(50) NOT NULL,
        `dim1` varchar(50) NOT NULL,
        `dim2` varchar(50) NOT NULL,
        `dim3` varchar(50) NOT NULL,
        `dim4` varchar(50) NOT NULL,
        `dim5` varchar(50) NOT NULL,
        `visit_time` DATE NOT NULL
    ) ENGINE=OLAP
    DUPLICATE KEY(`user_id`)
    DISTRIBUTED BY HASH(`user_id`) BUCKETS 32
    PROPERTIES (
        "replication_allocation" = "tag.location.default: 3"
    );

Using the auto-increment column, create the following dictionary table:

    CREATE TABLE `demo`.`dictionary_tbl` (
        `user_id` varchar(50) NOT NULL,
        `aid` BIGINT NOT NULL AUTO_INCREMENT
    ) ENGINE=OLAP
    UNIQUE KEY(`user_id`)
    DISTRIBUTED BY HASH(`user_id`) BUCKETS 32
    PROPERTIES (
        "replication_allocation" = "tag.location.default: 3",
        "enable_unique_key_merge_on_write" = "true"
    );

Import the value of user_id from existing data into the dictionary table, establishing the mapping of user_id to integer values:

    insert into dictionary_tbl(user_id)
    select user_id from dwd_dup_tbl group by user_id;

Alternatively, import only the user_id values from incremental data into the dictionary table:

    insert into dictionary_tbl(user_id)
    select dwd_dup_tbl.user_id from dwd_dup_tbl left join dictionary_tbl
    on dwd_dup_tbl.user_id = dictionary_tbl.user_id
    where dwd_dup_tbl.visit_time > '2023-12-10' and dictionary_tbl.user_id is NULL;

In real-world scenarios, Flink connectors can also be employed to write data into Doris.

Assuming dim1, dim3, dim5 represent statistical dimensions of interest to us, create the following table to store aggregated results:

    CREATE TABLE `demo`.`dws_agg_tbl` (
        `dim1` varchar(50) NOT NULL,
        `dim3` varchar(50) NOT NULL,
        `dim5` varchar(50) NOT NULL,
        `user_id_bitmap` BITMAP BITMAP_UNION NOT NULL,
        `pv` BIGINT SUM NOT NULL
    ) ENGINE=OLAP
    AGGREGATE KEY(`dim1`,`dim3`,`dim5`)
    DISTRIBUTED BY HASH(`dim1`) BUCKETS 32
    PROPERTIES (
        "replication_allocation" = "tag.location.default: 3"
    );

Store the result of the data aggregation operations into the aggregation result table:

    insert into dws_agg_tbl
    select dwd_dup_tbl.dim1, dwd_dup_tbl.dim3, dwd_dup_tbl.dim5, BITMAP_UNION(TO_BITMAP(dictionary_tbl.aid)), COUNT(1)
    from dwd_dup_tbl INNER JOIN dictionary_tbl on dwd_dup_tbl.user_id = dictionary_tbl.user_id
    group by dwd_dup_tbl.dim1, dwd_dup_tbl.dim3, dwd_dup_tbl.dim5;

Perform UV and PV queries using the following statement:

    select dim1, dim3, dim5, user_id_bitmap as uv, pv from dws_agg_tbl;
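
Because user_id_bitmap is a BITMAP value, the query above returns the bitmap itself. If a numeric UV is wanted instead, the bitmap cardinality can be computed, for example with bitmap_count; a sketch:

    select dim1, dim3, dim5, bitmap_count(user_id_bitmap) as uv, pv from dws_agg_tbl;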

Efficient Pagination

When displaying data on a page, pagination is often necessary. Traditional pagination typically involves using limit, offset, and order by in SQL queries. For instance, consider the following business table intended for display:

    CREATE TABLE `demo`.`records_tbl` (
        `key` int(11) NOT NULL COMMENT "",
        `name` varchar(26) NOT NULL COMMENT "",
        `address` varchar(41) NOT NULL COMMENT "",
        `city` varchar(11) NOT NULL COMMENT "",
        `nation` varchar(16) NOT NULL COMMENT "",
        `region` varchar(13) NOT NULL COMMENT "",
        `phone` varchar(16) NOT NULL COMMENT "",
        `mktsegment` varchar(11) NOT NULL COMMENT ""
    ) DUPLICATE KEY (`key`, `name`)
    DISTRIBUTED BY HASH(`key`) BUCKETS 10
    PROPERTIES (
        "replication_allocation" = "tag.location.default: 3"
    );

Assuming 100 records are displayed per page, the following SQL query fetches the first page's data:

    select * from records_tbl order by `key`, `name` limit 100;

Fetching the data for the second page can be accomplished by:

    select * from records_tbl order by `key`, `name` limit 100 offset 100;

However, for deep pagination queries (those with a large offset), even if the number of rows actually needed is small, this method still reads all of the preceding data into memory and sorts it fully before further processing, which is quite inefficient. Using an auto-increment column to assign a unique value to each row allows a predicate such as where unique_value > x limit y to filter out a large amount of data in advance, making pagination more efficient.

Continuing with the aforementioned business table, an auto-increment column is added to the table to give each row a unique identifier:

    CREATE TABLE `demo`.`records_tbl2` (
        `key` int(11) NOT NULL COMMENT "",
        `name` varchar(26) NOT NULL COMMENT "",
        `address` varchar(41) NOT NULL COMMENT "",
        `city` varchar(11) NOT NULL COMMENT "",
        `nation` varchar(16) NOT NULL COMMENT "",
        `region` varchar(13) NOT NULL COMMENT "",
        `phone` varchar(16) NOT NULL COMMENT "",
        `mktsegment` varchar(11) NOT NULL COMMENT "",
        `unique_value` BIGINT NOT NULL AUTO_INCREMENT
    ) DUPLICATE KEY (`key`, `name`)
    DISTRIBUTED BY HASH(`key`) BUCKETS 10
    PROPERTIES (
        "replication_num" = "3"
    );

For pagination displaying 100 records per page, to fetch the first page’s data, the following SQL query can be used:

    select * from records_tbl2 order by unique_value limit 100;

Record the maximum unique_value in the returned results; assume it is 99. The following query then fetches the data for the second page:

    select * from records_tbl2 where unique_value > 99 order by unique_value limit 100;

If you need to jump directly to a later page and cannot conveniently obtain the maximum unique_value from the preceding page's data (for example, jumping straight to page 101), the following query can be used:

    select `key`, name, address, city, nation, region, phone, mktsegment
    from records_tbl2, (select unique_value as max_value from records_tbl2 order by unique_value limit 1 offset 9999) as previous_data
    where records_tbl2.unique_value > previous_data.max_value
    order by records_tbl2.unique_value limit 100;
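
More generally, this keyset-style pagination only needs the application to remember the largest unique_value returned on the previous page. A sketch, where {last_value} is a placeholder the application substitutes (use 0, or omit the predicate, for the first page):

    select * from records_tbl2
    where unique_value > {last_value}
    order by unique_value
    limit 100;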