SQL query translation

Apache Druid supports two query languages: Druid SQL and native queries. This document describes the Druid SQL language.

Druid uses Apache Calcite to parse and plan SQL queries. Druid translates SQL statements into its native JSON-based query language. In general, the slight overhead of translating SQL on the Broker is the only performance penalty of using Druid SQL compared to native queries.

This topic includes best practices and tools to help you achieve good performance and minimize the impact of translation.

Best practices

Consider the following non-exhaustive list of best practices when looking into performance implications of translating Druid SQL queries to native queries.

  1. If you wrote a filter on the primary time column __time, make sure it is being correctly translated to an "intervals" filter, as described in the Time filters section below. If not, you may need to change the way you write the filter.

  2. Try to avoid subqueries underneath joins: they affect both performance and scalability. This includes implicit subqueries generated by conditions on mismatched types, and implicit subqueries generated by conditions that use expressions to refer to the right-hand side.

  3. Currently, Druid does not support pushing down predicates (conditions and filters) past a join (i.e. into a join’s children). Druid only supports pushing predicates into the join if they originated from above the join. Hence, the location of predicates and filters in your Druid SQL is very important. For the same reason, comma joins should be avoided; see the sketch after this list.

  4. Read through the Query execution page to understand how various types of native queries will be executed.

  5. Be careful when interpreting EXPLAIN PLAN output, and use request logging if in doubt. Request logs will show the exact native query that was run. See the next section for more details.

  6. If you encounter a query that could be planned better, feel free to raise an issue on GitHub. A reproducible test case is always appreciated.
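
To make item 3 concrete, here is a minimal sketch using hypothetical tables t1 and t2: rewriting a comma join as an explicit JOIN keeps the join condition in the ON clause and leaves the remaining filter above the join, where Druid can push it down.

-- Avoid: a comma join mixes the join condition and other filters in WHERE.
SELECT t1.x, t2.y
FROM t1, t2
WHERE t1.k = t2.k AND t1.z = 'a';

-- Prefer: an explicit JOIN with the join condition in ON,
-- leaving the remaining filter above the join.
SELECT t1.x, t2.y
FROM t1
INNER JOIN t2 ON t1.k = t2.k
WHERE t1.z = 'a';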

Interpreting EXPLAIN PLAN output

The EXPLAIN PLAN functionality can help you understand how a given SQL query will be translated to native. EXPLAIN PLAN statements return:

  • a PLAN column that contains a JSON array of native queries that Druid will run
  • a RESOURCES column that describes the resources used in the query
  • an ATTRIBUTES column that describes the attributes of the query, including:
    • statementType: the SQL statement type
    • targetDataSource: the target datasource in an INSERT or REPLACE statement
    • partitionedBy: the time-based partitioning granularity in an INSERT or REPLACE statement
    • clusteredBy: the clustering columns in an INSERT or REPLACE statement
    • replaceTimeChunks: the time chunks in a REPLACE statement

Example 1: EXPLAIN PLAN for a SELECT query on the wikipedia datasource:

EXPLAIN PLAN FOR
SELECT
  channel,
  COUNT(*)
FROM wikipedia
WHERE channel IN (SELECT page FROM wikipedia GROUP BY page ORDER BY COUNT(*) DESC LIMIT 10)
GROUP BY channel

The above EXPLAIN PLAN query returns the following result:

[
  [
    {
      "query": {
        "queryType": "topN",
        "dataSource": {
          "type": "join",
          "left": {
            "type": "table",
            "name": "wikipedia"
          },
          "right": {
            "type": "query",
            "query": {
              "queryType": "groupBy",
              "dataSource": {
                "type": "table",
                "name": "wikipedia"
              },
              "intervals": {
                "type": "intervals",
                "intervals": [
                  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
                ]
              },
              "granularity": {
                "type": "all"
              },
              "dimensions": [
                {
                  "type": "default",
                  "dimension": "page",
                  "outputName": "d0",
                  "outputType": "STRING"
                }
              ],
              "aggregations": [
                {
                  "type": "count",
                  "name": "a0"
                }
              ],
              "limitSpec": {
                "type": "default",
                "columns": [
                  {
                    "dimension": "a0",
                    "direction": "descending",
                    "dimensionOrder": {
                      "type": "numeric"
                    }
                  }
                ],
                "limit": 10
              },
              "context": {
                "sqlOuterLimit": 101,
                "sqlQueryId": "ee616a36-c30c-4eae-af00-245127956e42",
                "useApproximateCountDistinct": false,
                "useApproximateTopN": false
              }
            }
          },
          "rightPrefix": "j0.",
          "condition": "(\"channel\" == \"j0.d0\")",
          "joinType": "INNER"
        },
        "dimension": {
          "type": "default",
          "dimension": "channel",
          "outputName": "d0",
          "outputType": "STRING"
        },
        "metric": {
          "type": "dimension",
          "ordering": {
            "type": "lexicographic"
          }
        },
        "threshold": 101,
        "intervals": {
          "type": "intervals",
          "intervals": [
            "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
          ]
        },
        "granularity": {
          "type": "all"
        },
        "aggregations": [
          {
            "type": "count",
            "name": "a0"
          }
        ],
        "context": {
          "sqlOuterLimit": 101,
          "sqlQueryId": "ee616a36-c30c-4eae-af00-245127956e42",
          "useApproximateCountDistinct": false,
          "useApproximateTopN": false
        }
      },
      "signature": [
        {
          "name": "d0",
          "type": "STRING"
        },
        {
          "name": "a0",
          "type": "LONG"
        }
      ],
      "columnMappings": [
        {
          "queryColumn": "d0",
          "outputColumn": "channel"
        },
        {
          "queryColumn": "a0",
          "outputColumn": "EXPR$1"
        }
      ]
    }
  ],
  [
    {
      "name": "wikipedia",
      "type": "DATASOURCE"
    }
  ],
  {
    "statementType": "SELECT"
  }
]

Example 2: EXPLAIN PLAN for an INSERT query that inserts data into the wikipedia datasource:

EXPLAIN PLAN FOR
INSERT INTO wikipedia2
SELECT
  TIME_PARSE("timestamp") AS __time,
  namespace,
  cityName,
  countryName,
  regionIsoCode,
  metroCode,
  countryIsoCode,
  regionName
FROM TABLE(
  EXTERN(
    '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}',
    '{"type":"json"}',
    '[{"name":"timestamp","type":"string"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"long"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]'
  )
)
PARTITIONED BY ALL

The above EXPLAIN PLAN returns the following result:

[
  [
    {
      "query": {
        "queryType": "scan",
        "dataSource": {
          "type": "external",
          "inputSource": {
            "type": "http",
            "uris": [
              "https://druid.apache.org/data/wikipedia.json.gz"
            ]
          },
          "inputFormat": {
            "type": "json",
            "keepNullColumns": false,
            "assumeNewlineDelimited": false,
            "useJsonNodeReader": false
          },
          "signature": [
            {
              "name": "timestamp",
              "type": "STRING"
            },
            {
              "name": "namespace",
              "type": "STRING"
            },
            {
              "name": "cityName",
              "type": "STRING"
            },
            {
              "name": "countryName",
              "type": "STRING"
            },
            {
              "name": "regionIsoCode",
              "type": "STRING"
            },
            {
              "name": "metroCode",
              "type": "LONG"
            },
            {
              "name": "countryIsoCode",
              "type": "STRING"
            },
            {
              "name": "regionName",
              "type": "STRING"
            }
          ]
        },
        "intervals": {
          "type": "intervals",
          "intervals": [
            "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
          ]
        },
        "virtualColumns": [
          {
            "type": "expression",
            "name": "v0",
            "expression": "timestamp_parse(\"timestamp\",null,'UTC')",
            "outputType": "LONG"
          }
        ],
        "resultFormat": "compactedList",
        "columns": [
          "cityName",
          "countryIsoCode",
          "countryName",
          "metroCode",
          "namespace",
          "regionIsoCode",
          "regionName",
          "v0"
        ],
        "legacy": false,
        "context": {
          "finalizeAggregations": false,
          "forceExpressionVirtualColumns": true,
          "groupByEnableMultiValueUnnesting": false,
          "maxNumTasks": 5,
          "multiStageQuery": true,
          "queryId": "42e3de2b-daaf-40f9-a0e7-2c6184529ea3",
          "scanSignature": "[{\"name\":\"cityName\",\"type\":\"STRING\"},{\"name\":\"countryIsoCode\",\"type\":\"STRING\"},{\"name\":\"countryName\",\"type\":\"STRING\"},{\"name\":\"metroCode\",\"type\":\"LONG\"},{\"name\":\"namespace\",\"type\":\"STRING\"},{\"name\":\"regionIsoCode\",\"type\":\"STRING\"},{\"name\":\"regionName\",\"type\":\"STRING\"},{\"name\":\"v0\",\"type\":\"LONG\"}]",
          "sqlInsertSegmentGranularity": "{\"type\":\"all\"}",
          "sqlQueryId": "42e3de2b-daaf-40f9-a0e7-2c6184529ea3",
          "useNativeQueryExplain": true
        },
        "granularity": {
          "type": "all"
        }
      },
      "signature": [
        {
          "name": "v0",
          "type": "LONG"
        },
        {
          "name": "namespace",
          "type": "STRING"
        },
        {
          "name": "cityName",
          "type": "STRING"
        },
        {
          "name": "countryName",
          "type": "STRING"
        },
        {
          "name": "regionIsoCode",
          "type": "STRING"
        },
        {
          "name": "metroCode",
          "type": "LONG"
        },
        {
          "name": "countryIsoCode",
          "type": "STRING"
        },
        {
          "name": "regionName",
          "type": "STRING"
        }
      ],
      "columnMappings": [
        {
          "queryColumn": "v0",
          "outputColumn": "__time"
        },
        {
          "queryColumn": "namespace",
          "outputColumn": "namespace"
        },
        {
          "queryColumn": "cityName",
          "outputColumn": "cityName"
        },
        {
          "queryColumn": "countryName",
          "outputColumn": "countryName"
        },
        {
          "queryColumn": "regionIsoCode",
          "outputColumn": "regionIsoCode"
        },
        {
          "queryColumn": "metroCode",
          "outputColumn": "metroCode"
        },
        {
          "queryColumn": "countryIsoCode",
          "outputColumn": "countryIsoCode"
        },
        {
          "queryColumn": "regionName",
          "outputColumn": "regionName"
        }
      ]
    }
  ],
  [
    {
      "name": "EXTERNAL",
      "type": "EXTERNAL"
    },
    {
      "name": "wikipedia",
      "type": "DATASOURCE"
    }
  ],
  {
    "statementType": "INSERT",
    "targetDataSource": "wikipedia",
    "partitionedBy": {
      "type": "all"
    }
  }
]

Example 3: EXPLAIN PLAN for a REPLACE query that replaces all the data in the wikipedia datasource with a DAY time partitioning, and cityName and countryName as the clustering columns:

EXPLAIN PLAN FOR
REPLACE INTO wikipedia
OVERWRITE ALL
SELECT
  TIME_PARSE("timestamp") AS __time,
  namespace,
  cityName,
  countryName,
  regionIsoCode,
  metroCode,
  countryIsoCode,
  regionName
FROM TABLE(
  EXTERN(
    '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}',
    '{"type":"json"}',
    '[{"name":"timestamp","type":"string"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"long"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]'
  )
)
PARTITIONED BY DAY
CLUSTERED BY cityName, countryName

The above EXPLAIN PLAN query returns the following result:

[
  [
    {
      "query": {
        "queryType": "scan",
        "dataSource": {
          "type": "external",
          "inputSource": {
            "type": "http",
            "uris": [
              "https://druid.apache.org/data/wikipedia.json.gz"
            ]
          },
          "inputFormat": {
            "type": "json",
            "keepNullColumns": false,
            "assumeNewlineDelimited": false,
            "useJsonNodeReader": false
          },
          "signature": [
            {
              "name": "timestamp",
              "type": "STRING"
            },
            {
              "name": "namespace",
              "type": "STRING"
            },
            {
              "name": "cityName",
              "type": "STRING"
            },
            {
              "name": "countryName",
              "type": "STRING"
            },
            {
              "name": "regionIsoCode",
              "type": "STRING"
            },
            {
              "name": "metroCode",
              "type": "LONG"
            },
            {
              "name": "countryIsoCode",
              "type": "STRING"
            },
            {
              "name": "regionName",
              "type": "STRING"
            }
          ]
        },
        "intervals": {
          "type": "intervals",
          "intervals": [
            "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
          ]
        },
        "virtualColumns": [
          {
            "type": "expression",
            "name": "v0",
            "expression": "timestamp_parse(\"timestamp\",null,'UTC')",
            "outputType": "LONG"
          }
        ],
        "resultFormat": "compactedList",
        "columns": [
          "cityName",
          "countryIsoCode",
          "countryName",
          "metroCode",
          "namespace",
          "regionIsoCode",
          "regionName",
          "v0"
        ],
        "legacy": false,
        "context": {
          "finalizeAggregations": false,
          "groupByEnableMultiValueUnnesting": false,
          "maxNumTasks": 5,
          "queryId": "d88e0823-76d4-40d9-a1a7-695c8577b79f",
          "scanSignature": "[{\"name\":\"cityName\",\"type\":\"STRING\"},{\"name\":\"countryIsoCode\",\"type\":\"STRING\"},{\"name\":\"countryName\",\"type\":\"STRING\"},{\"name\":\"metroCode\",\"type\":\"LONG\"},{\"name\":\"namespace\",\"type\":\"STRING\"},{\"name\":\"regionIsoCode\",\"type\":\"STRING\"},{\"name\":\"regionName\",\"type\":\"STRING\"},{\"name\":\"v0\",\"type\":\"LONG\"}]",
          "sqlInsertSegmentGranularity": "\"DAY\"",
          "sqlQueryId": "d88e0823-76d4-40d9-a1a7-695c8577b79f",
          "sqlReplaceTimeChunks": "all"
        },
        "granularity": {
          "type": "all"
        }
      },
      "signature": [
        {
          "name": "v0",
          "type": "LONG"
        },
        {
          "name": "namespace",
          "type": "STRING"
        },
        {
          "name": "cityName",
          "type": "STRING"
        },
        {
          "name": "countryName",
          "type": "STRING"
        },
        {
          "name": "regionIsoCode",
          "type": "STRING"
        },
        {
          "name": "metroCode",
          "type": "LONG"
        },
        {
          "name": "countryIsoCode",
          "type": "STRING"
        },
        {
          "name": "regionName",
          "type": "STRING"
        }
      ],
      "columnMappings": [
        {
          "queryColumn": "v0",
          "outputColumn": "__time"
        },
        {
          "queryColumn": "namespace",
          "outputColumn": "namespace"
        },
        {
          "queryColumn": "cityName",
          "outputColumn": "cityName"
        },
        {
          "queryColumn": "countryName",
          "outputColumn": "countryName"
        },
        {
          "queryColumn": "regionIsoCode",
          "outputColumn": "regionIsoCode"
        },
        {
          "queryColumn": "metroCode",
          "outputColumn": "metroCode"
        },
        {
          "queryColumn": "countryIsoCode",
          "outputColumn": "countryIsoCode"
        },
        {
          "queryColumn": "regionName",
          "outputColumn": "regionName"
        }
      ]
    }
  ],
  [
    {
      "name": "EXTERNAL",
      "type": "EXTERNAL"
    },
    {
      "name": "wikipedia",
      "type": "DATASOURCE"
    }
  ],
  {
    "statementType": "REPLACE",
    "targetDataSource": "wikipedia",
    "partitionedBy": "DAY",
    "clusteredBy": ["cityName","countryName"],
    "replaceTimeChunks": "all"
  }
]

In Example 1, the JOIN operator gets translated to a join datasource. See the Joins section below for more details about how this works.

You can see this for yourself using Druid’s request logging feature. After enabling request logging and running a query of this shape, the logs show the exact native query that ran. For instance, a variant of Example 1 that filters on page rather than channel runs as the following native query.

{
  "queryType": "groupBy",
  "dataSource": {
    "type": "join",
    "left": "wikipedia",
    "right": {
      "type": "query",
      "query": {
        "queryType": "topN",
        "dataSource": "wikipedia",
        "dimension": {"type": "default", "dimension": "page", "outputName": "d0"},
        "metric": {"type": "numeric", "metric": "a0"},
        "threshold": 10,
        "intervals": "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z",
        "granularity": "all",
        "aggregations": [
          {"type": "count", "name": "a0"}
        ]
      }
    },
    "rightPrefix": "j0.",
    "condition": "(\"page\" == \"j0.d0\")",
    "joinType": "INNER"
  },
  "intervals": "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z",
  "granularity": "all",
  "dimensions": [
    {"type": "default", "dimension": "channel", "outputName": "d0"}
  ],
  "aggregations": [
    {"type": "count", "name": "a0"}
  ]
}
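
Request logging is not enabled by default. The following is a minimal sketch of Broker configuration that would enable it, assuming file-based request logging (the directory is illustrative):

# runtime.properties: write one log entry per query to files in this directory.
druid.request.logging.type=file
druid.request.logging.dir=/var/log/druid/requests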

Query types

Druid SQL uses four different native query types.

  • Scan is used for queries that do not aggregate—no GROUP BY, no DISTINCT.

  • Timeseries is used for queries that GROUP BY FLOOR(__time TO unit) or TIME_FLOOR(__time, period), have no other grouping expressions, no HAVING clause, no nesting, and either no ORDER BY, or an ORDER BY that orders by the same expression present in the GROUP BY. Druid SQL also uses Timeseries for “grand total” queries that have aggregation functions but no GROUP BY. This query type takes advantage of the fact that Druid segments are sorted by time.

  • TopN is used by default for queries that group by a single expression, do have ORDER BY and LIMIT clauses, do not have HAVING clauses, and are not nested. However, the TopN query type will deliver approximate ranking and results in some cases; if you want to avoid this, set “useApproximateTopN” to “false”. TopN results are always computed in memory. See the TopN documentation for more details.

  • GroupBy is used for all other aggregations, including any nested aggregation queries. Druid’s GroupBy is a traditional aggregation engine: it delivers exact results and rankings and supports a wide variety of features. GroupBy aggregates in memory if it can, but it may spill to disk if it doesn’t have enough memory to complete your query. Results are streamed back from data processes through the Broker if you ORDER BY the same expressions in your GROUP BY clause, or if you don’t have an ORDER BY at all. If your query has an ORDER BY referencing expressions that don’t appear in the GROUP BY clause (like aggregation functions) then the Broker will materialize a list of results in memory, up to a max of your LIMIT, if any. See the GroupBy documentation for details about tuning performance and memory use.
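
As an illustration, the following sketches against the wikipedia datasource show SQL shapes that would typically map to each native query type, assuming default context settings:

-- Scan: no aggregation.
SELECT __time, channel, page
FROM wikipedia
LIMIT 10;

-- Timeseries: grouping only on a floored __time, with no other dimensions.
SELECT FLOOR(__time TO HOUR), COUNT(*)
FROM wikipedia
GROUP BY FLOOR(__time TO HOUR);

-- TopN: a single grouping expression plus ORDER BY and LIMIT.
SELECT channel, COUNT(*) AS cnt
FROM wikipedia
GROUP BY channel
ORDER BY cnt DESC
LIMIT 10;

-- GroupBy: everything else, for example multiple grouping expressions.
SELECT channel, page, COUNT(*)
FROM wikipedia
GROUP BY channel, page;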

Time filters

For all native query types, filters on the __time column will be translated into top-level query “intervals” whenever possible, which allows Druid to use its global time index to quickly prune the set of data that must be scanned. Consider this (non-exhaustive) list of time filters that will be recognized and translated to “intervals”:

  • __time >= TIMESTAMP '2000-01-01 00:00:00' (comparison to absolute time)
  • __time >= CURRENT_TIMESTAMP - INTERVAL '8' HOUR (comparison to relative time)
  • FLOOR(__time TO DAY) = TIMESTAMP '2000-01-01 00:00:00' (specific day)
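
For example, the first query below compares __time directly and should be recognized, while the second wraps __time in an expression, which may prevent the translation (a hypothetical sketch of a shape to watch out for, not an exhaustive rule):

-- Likely translated to "intervals": __time is compared directly.
SELECT COUNT(*)
FROM wikipedia
WHERE __time >= TIMESTAMP '2016-06-27 00:00:00';

-- May not be translated: __time is wrapped in an arbitrary expression,
-- which can force row-by-row filtering instead of interval pruning.
SELECT COUNT(*)
FROM wikipedia
WHERE TIMESTAMP_TO_MILLIS(__time) / 1000 >= 1466985600;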

Refer to the Interpreting EXPLAIN PLAN output section for details on confirming that time filters are being translated as you expect.

Joins

SQL join operators are translated to native join datasources as follows:

  1. Joins that the native layer can handle directly are translated literally, to a join datasource whose left, right, and condition are faithful translations of the original SQL. This includes any SQL join where the right-hand side is a lookup or subquery, and where the condition is an equality where one side is an expression based on the left-hand table, the other side is a simple column reference to the right-hand table, and both sides of the equality are the same data type.

  2. If a join cannot be handled directly by a native join datasource as written, Druid SQL will insert subqueries to make it runnable. For example, foo INNER JOIN bar ON foo.abc = LOWER(bar.def) cannot be directly translated, because there is an expression on the right-hand side instead of a simple column access. A subquery will be inserted that effectively transforms this clause to foo INNER JOIN (SELECT LOWER(def) AS def FROM bar) t ON foo.abc = t.def, as written out in the sketch after this list.

  3. Druid SQL does not currently reorder joins to optimize queries.
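
Written out as full statements, the rewrite described in item 2 looks like this (foo and bar are the hypothetical tables from the example):

-- As written: the right-hand side of the condition is an expression,
-- so the join cannot be translated directly.
SELECT foo.abc, bar.def
FROM foo
INNER JOIN bar ON foo.abc = LOWER(bar.def);

-- What Druid SQL effectively runs: a subquery makes the right-hand side
-- of the condition a simple column reference.
SELECT foo.abc, t.def
FROM foo
INNER JOIN (SELECT LOWER(def) AS def FROM bar) t ON foo.abc = t.def;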

Refer to the Interpreting EXPLAIN PLAN output section for details on confirming that joins are being translated as you expect.

Refer to the Query execution page for information about how joins are executed.

Subqueries

Subqueries in SQL are generally translated to native query datasources. Refer to the Query execution page for information about how subqueries are executed.
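
For instance, in a nested aggregation like the following sketch, the inner GROUP BY becomes a native query datasource that the outer aggregation reads from:

-- The inner GROUP BY is translated to a "query" datasource;
-- the outer query aggregates over its results.
SELECT AVG(cnt)
FROM (
  SELECT channel, COUNT(*) AS cnt
  FROM wikipedia
  GROUP BY channel
) t;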

Note: Subqueries in the WHERE clause, like WHERE col1 IN (SELECT foo FROM ...), are translated to inner joins, as shown in Example 1 above.

Approximations

Druid SQL will use approximate algorithms in some situations:

  • The COUNT(DISTINCT col) aggregation function by default uses a variant of HyperLogLog, a fast approximate distinct counting algorithm. Druid SQL will switch to exact distinct counts if you set “useApproximateCountDistinct” to “false”, either through query context or through Broker configuration; see the example after this list.

  • GROUP BY queries over a single column with ORDER BY and LIMIT may be executed using the TopN engine, which uses an approximate algorithm. Druid SQL will switch to an exact grouping algorithm if you set “useApproximateTopN” to “false”, either through query context or through Broker configuration.

  • Aggregation functions that are labeled as using sketches or approximations, such as APPROX_COUNT_DISTINCT, are always approximate, regardless of configuration.
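
For example, here is a minimal sketch of a SQL API request body that disables approximate distinct counting for a single query through the query context:

{
  "query": "SELECT COUNT(DISTINCT channel) FROM wikipedia",
  "context": {
    "useApproximateCountDistinct": false
  }
}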

A known issue with approximate functions based on data sketches

The APPROX_QUANTILE_DS and DS_QUANTILES_SKETCH functions can fail with an IllegalStateException if one of the sketches for the query hits maxStreamLength: the maximum number of items to store in each sketch. See GitHub issue 11544 for more details. To work around the issue, increase the value of the maximum stream length with the approxQuantileDsMaxStreamLength parameter in the query context. Since it is set to 1,000,000,000 by default, you don’t need to override it in most cases. See accuracy information in the DataSketches documentation for how many bytes are required per stream length. This query context parameter is a temporary solution to avoid the known issue. It may be removed in a future release after the bug is fixed.
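
If you do hit the issue, the limit can be raised through the query context; the following sketch is illustrative only (my_table, "value", and the chosen limit are hypothetical):

{
  "query": "SELECT APPROX_QUANTILE_DS(\"value\", 0.99) FROM my_table",
  "context": {
    "approxQuantileDsMaxStreamLength": 2000000000
  }
}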

Unsupported features

Druid does not support all SQL features. In particular, the following features are not supported.

  • JOIN between native datasources (table, lookup, subquery) and system tables.
  • JOIN conditions that are not an equality between expressions from the left- and right-hand sides.
  • JOIN conditions containing a constant value inside the condition.
  • JOIN conditions on a column which contains a multi-value dimension.
  • ORDER BY for a non-aggregating query, except for ORDER BY __time or ORDER BY __time DESC, which are supported. This restriction only applies to non-aggregating queries; you can ORDER BY any column in an aggregating query.
  • DDL and DML.
  • Using Druid-specific functions like TIME_PARSE and APPROX_QUANTILE_DS on system tables.

Additionally, some Druid native query features are not supported by the SQL language. Some unsupported Druid features include: