Arrays

Apache Druid supports SQL standard ARRAY typed columns for VARCHAR, BIGINT, and DOUBLE types (native types ARRAY<STRING>, ARRAY<LONG>, and ARRAY<DOUBLE>). Other, more complicated ARRAY types must be stored in nested columns. Druid ARRAY types are distinct from multi-value dimensions, which have significantly different behavior than standard arrays.

This document describes inserting, filtering, and grouping behavior for ARRAY typed columns. Refer to the Druid SQL data type documentation and SQL array function reference for additional details about the functions available to use with ARRAY columns and types in SQL.

The following sections describe this behavior using the following example data, which includes three ARRAY typed columns:

```json
{"timestamp": "2023-01-01T00:00:00", "label": "row1", "arrayString": ["a", "b"], "arrayLong":[1, null,3], "arrayDouble":[1.1, 2.2, null]}
{"timestamp": "2023-01-01T00:00:00", "label": "row2", "arrayString": [null, "b"], "arrayLong":null, "arrayDouble":[999, null, 5.5]}
{"timestamp": "2023-01-01T00:00:00", "label": "row3", "arrayString": [], "arrayLong":[1, 2, 3], "arrayDouble":[null, 2.2, 1.1]}
{"timestamp": "2023-01-01T00:00:00", "label": "row4", "arrayString": ["a", "b"], "arrayLong":[1, 2, 3], "arrayDouble":[]}
{"timestamp": "2023-01-01T00:00:00", "label": "row5", "arrayString": null, "arrayLong":[], "arrayDouble":null}
```

Ingesting arrays

Native batch and streaming ingestion

When using native batch or streaming ingestion, such as with Apache Kafka, arrays can be ingested using the "auto" type dimension schema, which is shared with type-aware schema discovery.

When ingesting from TSV or CSV data, you can specify the array delimiter using the listDelimiter field in the inputFormat. JSON data must be formatted as a JSON array to be ingested as an array type; it does not require any additional inputFormat configuration.
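For example, a TSV inputFormat might declare a delimiter character for array fields like this (the pipe character here is only illustrative; use whatever delimiter your data actually contains):

```json
{
  "type": "tsv",
  "findColumnsFromHeader": true,
  "listDelimiter": "|"
}
```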

The following shows an example dimensionsSpec for native ingestion of the data used in this document:

```json
"dimensions": [
  {
    "type": "auto",
    "name": "label"
  },
  {
    "type": "auto",
    "name": "arrayString"
  },
  {
    "type": "auto",
    "name": "arrayLong"
  },
  {
    "type": "auto",
    "name": "arrayDouble"
  }
],
```

SQL-based ingestion

Arrays can be inserted with SQL-based ingestion.

Examples

```sql
REPLACE INTO "array_example" OVERWRITE ALL
WITH "ext" AS (
  SELECT *
  FROM TABLE(
    EXTERN(
      '{"type":"inline","data":"{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row1\", \"arrayString\": [\"a\", \"b\"], \"arrayLong\":[1, null,3], \"arrayDouble\":[1.1, 2.2, null]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row2\", \"arrayString\": [null, \"b\"], \"arrayLong\":null, \"arrayDouble\":[999, null, 5.5]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row3\", \"arrayString\": [], \"arrayLong\":[1, 2, 3], \"arrayDouble\":[null, 2.2, 1.1]} \n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row4\", \"arrayString\": [\"a\", \"b\"], \"arrayLong\":[1, 2, 3], \"arrayDouble\":[]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row5\", \"arrayString\": null, \"arrayLong\":[], \"arrayDouble\":null}"}',
      '{"type":"json"}'
    )
  ) EXTEND (
    "timestamp" VARCHAR,
    "label" VARCHAR,
    "arrayString" VARCHAR ARRAY,
    "arrayLong" BIGINT ARRAY,
    "arrayDouble" DOUBLE ARRAY
  )
)
SELECT
  TIME_PARSE("timestamp") AS "__time",
  "label",
  "arrayString",
  "arrayLong",
  "arrayDouble"
FROM "ext"
PARTITIONED BY DAY
```

Arrays can also be used as GROUP BY keys for rollup:

```sql
REPLACE INTO "array_example_rollup" OVERWRITE ALL
WITH "ext" AS (
  SELECT *
  FROM TABLE(
    EXTERN(
      '{"type":"inline","data":"{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row1\", \"arrayString\": [\"a\", \"b\"], \"arrayLong\":[1, null,3], \"arrayDouble\":[1.1, 2.2, null]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row2\", \"arrayString\": [null, \"b\"], \"arrayLong\":null, \"arrayDouble\":[999, null, 5.5]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row3\", \"arrayString\": [], \"arrayLong\":[1, 2, 3], \"arrayDouble\":[null, 2.2, 1.1]} \n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row4\", \"arrayString\": [\"a\", \"b\"], \"arrayLong\":[1, 2, 3], \"arrayDouble\":[]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row5\", \"arrayString\": null, \"arrayLong\":[], \"arrayDouble\":null}"}',
      '{"type":"json"}'
    )
  ) EXTEND (
    "timestamp" VARCHAR,
    "label" VARCHAR,
    "arrayString" VARCHAR ARRAY,
    "arrayLong" BIGINT ARRAY,
    "arrayDouble" DOUBLE ARRAY
  )
)
SELECT
  TIME_PARSE("timestamp") AS "__time",
  "label",
  "arrayString",
  "arrayLong",
  "arrayDouble",
  COUNT(*) AS "count"
FROM "ext"
GROUP BY 1,2,3,4,5
PARTITIONED BY DAY
```

arrayIngestMode

The arrayIngestMode query context parameter provides backwards compatible behavior for Druid versions older than 31.
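The parameter is set in the query context of an INSERT or REPLACE statement. For example, as a JSON context fragment:

```json
{
  "arrayIngestMode": "array"
}
```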

When arrayIngestMode is array, SQL ARRAY types are stored using Druid array columns. This is recommended for new tables and the default configuration for Druid 31 and newer.

When arrayIngestMode is mvd (legacy), SQL VARCHAR ARRAY values are implicitly wrapped in ARRAY_TO_MV. This causes them to be stored as multi-value strings, using the same STRING column type as regular scalar strings. SQL BIGINT ARRAY and DOUBLE ARRAY values cannot be loaded under arrayIngestMode: mvd. This mode is provided only for backwards compatibility; it is not recommended and will be removed in a future release.

The following table summarizes the differences in SQL ARRAY handling between arrayIngestMode: array and arrayIngestMode: mvd.

| SQL type | Stored type when arrayIngestMode: array (default) | Stored type when arrayIngestMode: mvd |
|----------|---------------------------------------------------|---------------------------------------|
| VARCHAR ARRAY | ARRAY&lt;STRING&gt; | multi-value STRING |
| BIGINT ARRAY | ARRAY&lt;LONG&gt; | not possible (validation error) |
| DOUBLE ARRAY | ARRAY&lt;DOUBLE&gt; | not possible (validation error) |

In either mode, you can explicitly wrap string arrays in ARRAY_TO_MV to cause them to be stored as multi-value strings.
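For example, a hypothetical SQL-based ingestion could store arrayString as a multi-value string regardless of arrayIngestMode by wrapping it explicitly. The target table "mvd_example" and output column "stringMvd" are illustrative names, and "ext" stands for an input relation such as the WITH clause in the earlier examples:

```sql
REPLACE INTO "mvd_example" OVERWRITE ALL
SELECT
  TIME_PARSE("timestamp") AS "__time",
  "label",
  ARRAY_TO_MV("arrayString") AS "stringMvd"
FROM "ext"
PARTITIONED BY DAY
```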

When validating a SQL INSERT or REPLACE statement that contains arrays, Druid checks whether the statement would mix string arrays and multi-value strings in the same column. If so, the statement fails validation unless the column is named in the skipTypeVerification context parameter, which accepts either a comma-separated list of column names or a JSON array in string form. This validation prevents accidentally mixing arrays and multi-value strings in the same column.
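For example, to allow a statement to write string arrays into a column that previously held multi-value strings, name that column in the query context (arrayString here is just the example column from this document):

```json
{
  "skipTypeVerification": "arrayString"
}
```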

Querying arrays

Filtering

All query types, as well as filtered aggregators, can filter on array typed columns. Filters follow these rules for array types:

  • All filters match against the entire array value for the row
  • Native value filters like equality and range match on entire array values, as do SQL constructs that plan into these native filters
  • The IS NULL filter will match rows where the entire array value is null
  • Array specific functions like ARRAY_CONTAINS and ARRAY_OVERLAP follow the behavior specified by those functions
  • All other filters do not directly support ARRAY types and will result in a query error

Example: equality

```sql
SELECT *
FROM "array_example"
WHERE arrayLong = ARRAY[1,2,3]
```

results in:

```json
{"__time":"2023-01-01T00:00:00.000Z","label":"row3","arrayString":"[]","arrayLong":"[1,2,3]","arrayDouble":"[null,2.2,1.1]"}
{"__time":"2023-01-01T00:00:00.000Z","label":"row4","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,2,3]","arrayDouble":"[]"}
```

Example: null

```sql
SELECT *
FROM "array_example"
WHERE arrayLong IS NULL
```

results in:

```json
{"__time":"2023-01-01T00:00:00.000Z","label":"row2","arrayString":"[null,\"b\"]","arrayLong":null,"arrayDouble":"[999.0,null,5.5]"}
```

Example: range

```sql
SELECT *
FROM "array_example"
WHERE arrayString >= ARRAY['a','b']
```

results in:

```json
{"__time":"2023-01-01T00:00:00.000Z","label":"row1","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,null,3]","arrayDouble":"[1.1,2.2,null]"}
{"__time":"2023-01-01T00:00:00.000Z","label":"row4","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,2,3]","arrayDouble":"[]"}
```

Example: ARRAY_CONTAINS

```sql
SELECT *
FROM "array_example"
WHERE ARRAY_CONTAINS(arrayString, 'a')
```

results in:

```json
{"__time":"2023-01-01T00:00:00.000Z","label":"row1","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,null,3]","arrayDouble":"[1.1,2.2,null]"}
{"__time":"2023-01-01T00:00:00.000Z","label":"row4","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,2,3]","arrayDouble":"[]"}
```

Grouping

When grouping on an array with SQL or a native groupBy query, grouping follows standard SQL behavior and groups on the entire array as a single value. The UNNEST function allows grouping on the individual array elements.

Example: SQL grouping query with no filtering

```sql
SELECT label, arrayString
FROM "array_example"
GROUP BY 1,2
```

results in:

```json
{"label":"row1","arrayString":"[\"a\",\"b\"]"}
{"label":"row2","arrayString":"[null,\"b\"]"}
{"label":"row3","arrayString":"[]"}
{"label":"row4","arrayString":"[\"a\",\"b\"]"}
{"label":"row5","arrayString":null}
```

Example: SQL grouping query with a filter

```sql
SELECT label, arrayString
FROM "array_example"
WHERE arrayLong = ARRAY[1,2,3]
GROUP BY 1,2
```

results in:

```json
{"label":"row3","arrayString":"[]"}
{"label":"row4","arrayString":"[\"a\",\"b\"]"}
```

Example: UNNEST

```sql
SELECT label, strings
FROM "array_example" CROSS JOIN UNNEST(arrayString) AS u(strings)
GROUP BY 1,2
```

results in:

```json
{"label":"row1","strings":"a"}
{"label":"row1","strings":"b"}
{"label":"row2","strings":null}
{"label":"row2","strings":"b"}
{"label":"row4","strings":"a"}
{"label":"row4","strings":"b"}
```

Differences between arrays and multi-value dimensions

Avoid confusing string arrays with multi-value dimensions. Arrays and multi-value dimensions are stored in different column types, and their query behavior differs. You can use the functions MV_TO_ARRAY and ARRAY_TO_MV to convert between the two if needed. In general, we recommend using arrays whenever possible, since they are a newer, more powerful feature with SQL-compliant behavior.
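For example, a query can convert between the two representations inline. Here "mixed_example" and "mvdColumn" are hypothetical names for a table containing a multi-value STRING column alongside an ARRAY column:

```sql
SELECT
  MV_TO_ARRAY("mvdColumn") AS "asArray",  -- multi-value STRING to ARRAY<STRING>
  ARRAY_TO_MV("arrayString") AS "asMvd"   -- ARRAY<STRING> to multi-value STRING
FROM "mixed_example"
```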

Use care during ingestion to ensure you get the type you want.

To get arrays when performing an ingestion using JSON ingestion specs, such as native batch or streaming ingestion from Apache Kafka, use dimension type auto or enable useSchemaDiscovery. When performing a SQL-based ingestion, write a query that generates arrays. Arrays may contain strings or numbers.

To get multi-value dimensions when performing an ingestion using JSON ingestion specs, use dimension type string and do not enable useSchemaDiscovery. When performing a SQL-based ingestion, wrap arrays in ARRAY_TO_MV to ensure you get multi-value dimensions. Multi-value dimensions can only contain strings.

You can tell which type you have by checking the INFORMATION_SCHEMA.COLUMNS table, using a query like:

```sql
SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'mytable'
```

Arrays have type ARRAY; multi-value strings have type VARCHAR.