SQL-based ingestion API

info

This page describes SQL-based batch ingestion using the druid-multi-stage-query extension, new in Druid 24.0. Refer to the ingestion methods table to determine which ingestion method is right for you.

The Query view in the web console provides a friendly experience for the multi-stage query task engine (MSQ task engine) and multi-stage query architecture. We recommend using the web console if you don’t need a programmatic interface.

When using the API for the MSQ task engine, the action you want to take determines the endpoint you use:

  • /druid/v2/sql/task: Submit a query for ingestion.
  • /druid/indexer/v1/task: Interact with a query, including getting its status or details, or canceling the query. This page describes a few of the Overlord Task APIs that you can use with the MSQ task engine. For information about Druid APIs, see the API reference for Druid.

In this topic, http://ROUTER_IP:ROUTER_PORT is a placeholder for your Router service address and port. Replace it with the information for your deployment. For example, use http://localhost:8888 for quickstart deployments.

Submit a query

Submits queries to the MSQ task engine.

The /druid/v2/sql/task endpoint accepts the following:

  • SQL requests in the JSON-over-HTTP form using the query, context, and parameters fields. The endpoint ignores the resultFormat, header, typesHeader, and sqlTypesHeader fields.
  • INSERT and REPLACE statements.
  • SELECT queries (experimental feature). SELECT query results are collected from workers by the controller, and written into the task report as an array of arrays. The behavior and result format of plain SELECT queries (without INSERT or REPLACE) is subject to change.

URL

POST /druid/v2/sql/task

Responses

  • 200 SUCCESS: Successfully submitted query.
  • 400 BAD REQUEST: Error thrown due to a bad query.
  • 500 INTERNAL SERVER ERROR: Request not sent due to unexpected conditions.

For a 400 or 500 response, the body is a JSON object detailing the error with the following format:

```json
{
  "error": "A well-defined error code.",
  "errorMessage": "A message with additional details about the error.",
  "errorClass": "Class of exception that caused this error.",
  "host": "The host on which the error occurred."
}
```

Sample request

The following example shows a query that fetches data from an external JSON source and inserts it into a table named wikipedia.

HTTP

```http
POST /druid/v2/sql/task HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
Content-Type: application/json

{
  "query": "INSERT INTO wikipedia\nSELECT\n TIME_PARSE(\"timestamp\") AS __time,\n *\nFROM TABLE(\n EXTERN(\n '{\"type\": \"http\", \"uris\": [\"https://druid.apache.org/data/wikipedia.json.gz\"]}',\n '{\"type\": \"json\"}',\n '[{\"name\": \"added\", \"type\": \"long\"}, {\"name\": \"channel\", \"type\": \"string\"}, {\"name\": \"cityName\", \"type\": \"string\"}, {\"name\": \"comment\", \"type\": \"string\"}, {\"name\": \"commentLength\", \"type\": \"long\"}, {\"name\": \"countryIsoCode\", \"type\": \"string\"}, {\"name\": \"countryName\", \"type\": \"string\"}, {\"name\": \"deleted\", \"type\": \"long\"}, {\"name\": \"delta\", \"type\": \"long\"}, {\"name\": \"deltaBucket\", \"type\": \"string\"}, {\"name\": \"diffUrl\", \"type\": \"string\"}, {\"name\": \"flags\", \"type\": \"string\"}, {\"name\": \"isAnonymous\", \"type\": \"string\"}, {\"name\": \"isMinor\", \"type\": \"string\"}, {\"name\": \"isNew\", \"type\": \"string\"}, {\"name\": \"isRobot\", \"type\": \"string\"}, {\"name\": \"isUnpatrolled\", \"type\": \"string\"}, {\"name\": \"metroCode\", \"type\": \"string\"}, {\"name\": \"namespace\", \"type\": \"string\"}, {\"name\": \"page\", \"type\": \"string\"}, {\"name\": \"regionIsoCode\", \"type\": \"string\"}, {\"name\": \"regionName\", \"type\": \"string\"}, {\"name\": \"timestamp\", \"type\": \"string\"}, {\"name\": \"user\", \"type\": \"string\"}]'\n )\n)\nPARTITIONED BY DAY",
  "context": {
    "maxNumTasks": 3
  }
}
```

cURL

```shell
curl --location --request POST 'http://ROUTER_IP:ROUTER_PORT/druid/v2/sql/task' \
--header 'Content-Type: application/json' \
--data-raw '{
  "query": "INSERT INTO wikipedia\nSELECT\n TIME_PARSE(\"timestamp\") AS __time,\n *\nFROM TABLE(\n EXTERN(\n '\''{\"type\": \"http\", \"uris\": [\"https://druid.apache.org/data/wikipedia.json.gz\"]}'\'',\n '\''{\"type\": \"json\"}'\'',\n '\''[{\"name\": \"added\", \"type\": \"long\"}, {\"name\": \"channel\", \"type\": \"string\"}, {\"name\": \"cityName\", \"type\": \"string\"}, {\"name\": \"comment\", \"type\": \"string\"}, {\"name\": \"commentLength\", \"type\": \"long\"}, {\"name\": \"countryIsoCode\", \"type\": \"string\"}, {\"name\": \"countryName\", \"type\": \"string\"}, {\"name\": \"deleted\", \"type\": \"long\"}, {\"name\": \"delta\", \"type\": \"long\"}, {\"name\": \"deltaBucket\", \"type\": \"string\"}, {\"name\": \"diffUrl\", \"type\": \"string\"}, {\"name\": \"flags\", \"type\": \"string\"}, {\"name\": \"isAnonymous\", \"type\": \"string\"}, {\"name\": \"isMinor\", \"type\": \"string\"}, {\"name\": \"isNew\", \"type\": \"string\"}, {\"name\": \"isRobot\", \"type\": \"string\"}, {\"name\": \"isUnpatrolled\", \"type\": \"string\"}, {\"name\": \"metroCode\", \"type\": \"string\"}, {\"name\": \"namespace\", \"type\": \"string\"}, {\"name\": \"page\", \"type\": \"string\"}, {\"name\": \"regionIsoCode\", \"type\": \"string\"}, {\"name\": \"regionName\", \"type\": \"string\"}, {\"name\": \"timestamp\", \"type\": \"string\"}, {\"name\": \"user\", \"type\": \"string\"}]'\''\n )\n)\nPARTITIONED BY DAY",
  "context": {
    "maxNumTasks": 3
  }
}'
```

Python

```python
import json
import requests

url = "http://ROUTER_IP:ROUTER_PORT/druid/v2/sql/task"

payload = json.dumps({
    "query": "INSERT INTO wikipedia\nSELECT\n TIME_PARSE(\"timestamp\") AS __time,\n *\nFROM TABLE(\n EXTERN(\n '{\"type\": \"http\", \"uris\": [\"https://druid.apache.org/data/wikipedia.json.gz\"]}',\n '{\"type\": \"json\"}',\n '[{\"name\": \"added\", \"type\": \"long\"}, {\"name\": \"channel\", \"type\": \"string\"}, {\"name\": \"cityName\", \"type\": \"string\"}, {\"name\": \"comment\", \"type\": \"string\"}, {\"name\": \"commentLength\", \"type\": \"long\"}, {\"name\": \"countryIsoCode\", \"type\": \"string\"}, {\"name\": \"countryName\", \"type\": \"string\"}, {\"name\": \"deleted\", \"type\": \"long\"}, {\"name\": \"delta\", \"type\": \"long\"}, {\"name\": \"deltaBucket\", \"type\": \"string\"}, {\"name\": \"diffUrl\", \"type\": \"string\"}, {\"name\": \"flags\", \"type\": \"string\"}, {\"name\": \"isAnonymous\", \"type\": \"string\"}, {\"name\": \"isMinor\", \"type\": \"string\"}, {\"name\": \"isNew\", \"type\": \"string\"}, {\"name\": \"isRobot\", \"type\": \"string\"}, {\"name\": \"isUnpatrolled\", \"type\": \"string\"}, {\"name\": \"metroCode\", \"type\": \"string\"}, {\"name\": \"namespace\", \"type\": \"string\"}, {\"name\": \"page\", \"type\": \"string\"}, {\"name\": \"regionIsoCode\", \"type\": \"string\"}, {\"name\": \"regionName\", \"type\": \"string\"}, {\"name\": \"timestamp\", \"type\": \"string\"}, {\"name\": \"user\", \"type\": \"string\"}]'\n )\n)\nPARTITIONED BY DAY",
    "context": {
        "maxNumTasks": 3
    }
})
headers = {
    'Content-Type': 'application/json'
}

response = requests.post(url, headers=headers, data=payload)
print(response.text)
```

Sample response

```json
{
  "taskId": "query-f795a235-4dc7-4fef-abac-3ae3f9686b79",
  "state": "RUNNING"
}
```

Response fields

| Field | Description |
|-------|-------------|
| taskId | Controller task ID. You can use Druid's standard Tasks API to interact with this controller task. |
| state | Initial state for the query. |
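Since the submit call only returns an initial RUNNING state, a client typically polls the task status endpoint until the task reaches a terminal state. The following sketch illustrates one way to do that; the `wait_for_task` helper and its injected `get_status` callable are our own illustration, not part of Druid. In practice `get_status` would issue `GET /druid/indexer/v1/task/{taskId}/status` and return the parsed JSON body.

```python
import time


def wait_for_task(get_status, task_id, poll_seconds=2, timeout_seconds=300):
    """Poll until the task reaches a terminal state (SUCCESS or FAILED).

    get_status: callable taking a task ID and returning the parsed JSON body
    of GET /druid/indexer/v1/task/{taskId}/status. Injected so the sketch is
    independent of any particular HTTP client.
    """
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        state = get_status(task_id)["status"]["status"]
        if state in ("SUCCESS", "FAILED"):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError(f"task {task_id} still running after {timeout_seconds}s")
```

The poll interval and timeout are arbitrary; long-running ingestions may need a much larger timeout.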

Get the status for a query task

Retrieves the status of a query task. It returns a JSON object with the task’s status code, runner status, task type, datasource, and other relevant metadata.

URL

GET /druid/indexer/v1/task/{taskId}/status

Responses

  • 200 SUCCESS: Successfully retrieved task status.
  • 404 NOT FOUND: Cannot find a task with the given ID.


Sample request

The following example shows how to retrieve the status of a task with the ID query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e.

HTTP

```http
GET /druid/indexer/v1/task/query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e/status HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

cURL

```shell
curl --location --request GET 'http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e/status'
```

Python

```python
import requests

url = "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e/status"

response = requests.get(url)
print(response.text)
```

Sample response

```json
{
  "task": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e",
  "status": {
    "id": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e",
    "groupId": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e",
    "type": "query_controller",
    "createdTime": "2022-09-14T22:12:00.183Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "RUNNING",
    "status": "RUNNING",
    "runnerStatusCode": "RUNNING",
    "duration": -1,
    "location": {
      "host": "localhost",
      "port": 8100,
      "tlsPort": -1
    },
    "dataSource": "kttm_simple",
    "errorMsg": null
  }
}
```

Get the report for a query task

Retrieves the task report for a query. The report provides detailed information about the query task, including its stages, warnings, and errors.

Keep the following in mind when using the task API to view reports:

  • The task report for an entire job is associated with the query_controller task. The query_worker tasks don’t have their own reports; their information is incorporated into the controller report.
  • The task report API may report 404 Not Found temporarily while the task is in the process of starting up.
  • As an experimental feature, the MSQ task engine supports running SELECT queries. SELECT query results are written into the multiStageQuery.payload.results.results task report key as an array of arrays. The behavior and result format of plain SELECT queries (without INSERT or REPLACE) is subject to change.
  • multiStageQuery.payload.results.resultsTruncated denotes whether the results in the report have been truncated to keep the report size manageable.

For an explanation of the fields in a report, see Report response fields.
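For SELECT queries, a client can pull the rows back out of the report once the task succeeds. The sketch below follows the `multiStageQuery.payload.results.results` key layout described above; the helper name is our own, and since SELECT via the MSQ task engine is experimental, this layout is subject to change.

```python
def extract_select_results(report):
    """Return (rows, truncated) from a query_controller task report.

    rows is the array of arrays stored under
    multiStageQuery.payload.results.results; truncated mirrors
    resultsTruncated, which flags results that were cut off to keep the
    report size manageable.
    """
    results = report["multiStageQuery"]["payload"]["results"]
    return results["results"], results.get("resultsTruncated", False)
```
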

URL

GET /druid/indexer/v1/task/{taskId}/reports

Responses

  • 200 SUCCESS: Successfully retrieved task report.


Sample request

The following example shows how to retrieve the report for a query with the task ID query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e.

HTTP

```http
GET /druid/indexer/v1/task/query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e/reports HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

cURL

```shell
curl --location --request GET 'http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e/reports'
```

Python

```python
import requests

url = "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e/reports"

response = requests.get(url)
print(response.text)
```

Sample response

The response shows an example report for a query.

```json
{
  "multiStageQuery": {
    "type": "multiStageQuery",
    "taskId": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e",
    "payload": {
      "status": {
        "status": "SUCCESS",
        "startTime": "2022-09-14T22:12:09.266Z",
        "durationMs": 28227,
        "workers": {
          "0": [
            {
              "workerId": "query-3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e-worker0_0",
              "state": "SUCCESS",
              "durationMs": 15511,
              "pendingMs": 137
            }
          ]
        },
        "pendingTasks": 0,
        "runningTasks": 2,
        "segmentLoadWaiterStatus": {
          "state": "SUCCESS",
          "dataSource": "kttm_simple",
          "startTime": "2022-09-14T23:12:09.266Z",
          "duration": 15,
          "totalSegments": 1,
          "usedSegments": 1,
          "precachedSegments": 0,
          "onDemandSegments": 0,
          "pendingSegments": 0,
          "unknownSegments": 0
        },
        "segmentReport": {
          "shardSpec": "NumberedShardSpec",
          "details": "Cannot use RangeShardSpec, RangedShardSpec only supports string CLUSTER BY keys. Using NumberedShardSpec instead."
        }
      },
      "stages": [
        {
          "stageNumber": 0,
          "definition": {
            "id": "71ecb11e-09d7-42f8-9225-1662c8e7e121_0",
            "input": [
              {
                "type": "external",
                "inputSource": {
                  "type": "http",
                  "uris": [
                    "https://static.imply.io/example-data/kttm-v2/kttm-v2-2019-08-25.json.gz"
                  ],
                  "httpAuthenticationUsername": null,
                  "httpAuthenticationPassword": null
                },
                "inputFormat": {
                  "type": "json",
                  "flattenSpec": null,
                  "featureSpec": {},
                  "keepNullColumns": false
                },
                "signature": [
                  {
                    "name": "timestamp",
                    "type": "STRING"
                  },
                  {
                    "name": "agent_category",
                    "type": "STRING"
                  },
                  {
                    "name": "agent_type",
                    "type": "STRING"
                  }
                ]
              }
            ],
            "processor": {
              "type": "scan",
              "query": {
                "queryType": "scan",
                "dataSource": {
                  "type": "inputNumber",
                  "inputNumber": 0
                },
                "intervals": {
                  "type": "intervals",
                  "intervals": [
                    "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
                  ]
                },
                "resultFormat": "compactedList",
                "columns": [
                  "agent_category",
                  "agent_type",
                  "timestamp"
                ],
                "context": {
                  "finalize": false,
                  "finalizeAggregations": false,
                  "groupByEnableMultiValueUnnesting": false,
                  "scanSignature": "[{\"name\":\"agent_category\",\"type\":\"STRING\"},{\"name\":\"agent_type\",\"type\":\"STRING\"},{\"name\":\"timestamp\",\"type\":\"STRING\"}]",
                  "sqlInsertSegmentGranularity": "{\"type\":\"all\"}",
                  "sqlQueryId": "3dc0c45d-34d7-4b15-86c9-cdb2d3ebfc4e",
                  "sqlReplaceTimeChunks": "all"
                },
                "granularity": {
                  "type": "all"
                }
              }
            },
            "signature": [
              {
                "name": "__boost",
                "type": "LONG"
              },
              {
                "name": "agent_category",
                "type": "STRING"
              },
              {
                "name": "agent_type",
                "type": "STRING"
              },
              {
                "name": "timestamp",
                "type": "STRING"
              }
            ],
            "shuffleSpec": {
              "type": "targetSize",
              "clusterBy": {
                "columns": [
                  {
                    "columnName": "__boost"
                  }
                ]
              },
              "targetSize": 3000000
            },
            "maxWorkerCount": 1,
            "shuffleCheckHasMultipleValues": true
          },
          "phase": "FINISHED",
          "workerCount": 1,
          "partitionCount": 1,
          "startTime": "2022-09-14T22:12:11.663Z",
          "duration": 19965,
          "sort": true
        },
        {
          "stageNumber": 1,
          "definition": {
            "id": "71ecb11e-09d7-42f8-9225-1662c8e7e121_1",
            "input": [
              {
                "type": "stage",
                "stage": 0
              }
            ],
            "processor": {
              "type": "segmentGenerator",
              "dataSchema": {
                "dataSource": "kttm_simple",
                "timestampSpec": {
                  "column": "__time",
                  "format": "millis",
                  "missingValue": null
                },
                "dimensionsSpec": {
                  "dimensions": [
                    {
                      "type": "string",
                      "name": "timestamp",
                      "multiValueHandling": "SORTED_ARRAY",
                      "createBitmapIndex": true
                    },
                    {
                      "type": "string",
                      "name": "agent_category",
                      "multiValueHandling": "SORTED_ARRAY",
                      "createBitmapIndex": true
                    },
                    {
                      "type": "string",
                      "name": "agent_type",
                      "multiValueHandling": "SORTED_ARRAY",
                      "createBitmapIndex": true
                    }
                  ],
                  "dimensionExclusions": [
                    "__time"
                  ],
                  "includeAllDimensions": false
                },
                "metricsSpec": [],
                "granularitySpec": {
                  "type": "arbitrary",
                  "queryGranularity": {
                    "type": "none"
                  },
                  "rollup": false,
                  "intervals": [
                    "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
                  ]
                },
                "transformSpec": {
                  "filter": null,
                  "transforms": []
                }
              },
              "columnMappings": [
                {
                  "queryColumn": "timestamp",
                  "outputColumn": "timestamp"
                },
                {
                  "queryColumn": "agent_category",
                  "outputColumn": "agent_category"
                },
                {
                  "queryColumn": "agent_type",
                  "outputColumn": "agent_type"
                }
              ],
              "tuningConfig": {
                "maxNumWorkers": 1,
                "maxRowsInMemory": 100000,
                "rowsPerSegment": 3000000
              }
            },
            "signature": [],
            "maxWorkerCount": 1
          },
          "phase": "FINISHED",
          "workerCount": 1,
          "partitionCount": 1,
          "startTime": "2022-09-14T22:12:31.602Z",
          "duration": 5891
        }
      ],
      "counters": {
        "0": {
          "0": {
            "input0": {
              "type": "channel",
              "rows": [
                465346
              ],
              "files": [
                1
              ],
              "totalFiles": [
                1
              ]
            },
            "output": {
              "type": "channel",
              "rows": [
                465346
              ],
              "bytes": [
                43694447
              ],
              "frames": [
                7
              ]
            },
            "shuffle": {
              "type": "channel",
              "rows": [
                465346
              ],
              "bytes": [
                41835307
              ],
              "frames": [
                73
              ]
            },
            "sortProgress": {
              "type": "sortProgress",
              "totalMergingLevels": 3,
              "levelToTotalBatches": {
                "0": 1,
                "1": 1,
                "2": 1
              },
              "levelToMergedBatches": {
                "0": 1,
                "1": 1,
                "2": 1
              },
              "totalMergersForUltimateLevel": 1,
              "progressDigest": 1
            }
          }
        },
        "1": {
          "0": {
            "input0": {
              "type": "channel",
              "rows": [
                465346
              ],
              "bytes": [
                41835307
              ],
              "frames": [
                73
              ]
            },
            "segmentGenerationProgress": {
              "type": "segmentGenerationProgress",
              "rowsProcessed": 465346,
              "rowsPersisted": 465346,
              "rowsMerged": 465346
            }
          }
        }
      }
    }
  }
}
```

The following table describes the response fields when you retrieve a report for an MSQ task engine query using the /druid/indexer/v1/task/{taskId}/reports endpoint:

| Field | Description |
|-------|-------------|
| multiStageQuery.taskId | Controller task ID. |
| multiStageQuery.payload.status | Query status container. |
| multiStageQuery.payload.status.status | RUNNING, SUCCESS, or FAILED. |
| multiStageQuery.payload.status.startTime | Start time of the query in ISO format. Only present if the query has started running. |
| multiStageQuery.payload.status.durationMs | Milliseconds elapsed after the query has started running. -1 denotes that the query hasn't started running yet. |
| multiStageQuery.payload.status.workers | Workers for the controller task. |
| multiStageQuery.payload.status.workers.<workerNumber> | Array of worker tasks including retries. |
| multiStageQuery.payload.status.workers.<workerNumber>[].workerId | ID of the worker task. |
| multiStageQuery.payload.status.workers.<workerNumber>[].status | RUNNING, SUCCESS, or FAILED. |
| multiStageQuery.payload.status.workers.<workerNumber>[].durationMs | Milliseconds elapsed between when the worker task was first requested and when it finished. -1 for worker tasks with status RUNNING. |
| multiStageQuery.payload.status.workers.<workerNumber>[].pendingMs | Milliseconds elapsed between when the worker task was first requested and when it fully started RUNNING. Actual work time can be calculated as actualWorkTimeMS = durationMs - pendingMs. |
| multiStageQuery.payload.status.pendingTasks | Number of tasks that are not fully started. -1 denotes that the number is currently unknown. |
| multiStageQuery.payload.status.runningTasks | Number of currently running tasks. Should be at least 1 since the controller is included. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus | Segment loading container. Only present after the segments have been published. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.state | Either INIT, WAITING, SUCCESS, FAILED, or TIMED_OUT. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.startTime | Time since which the controller has been waiting for the segments to finish loading. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.duration | The duration in milliseconds that the controller has been waiting for the segments to load. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.totalSegments | The total number of segments generated by the job, including tombstone segments (if any). |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.usedSegments | The number of segments marked as used based on the load rules. Unused segments can be cleaned up at any time. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.precachedSegments | The number of segments marked as precached and served by historicals, as per the load rules. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.onDemandSegments | The number of segments not loaded on any historical, as per the load rules. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.pendingSegments | The number of segments remaining to be loaded. |
| multiStageQuery.payload.status.segmentLoadWaiterStatus.unknownSegments | The number of segments whose status is unknown. |
| multiStageQuery.payload.status.segmentReport | Segment report. Only present if the query is an ingestion. |
| multiStageQuery.payload.status.segmentReport.shardSpec | Contains the shard spec chosen. |
| multiStageQuery.payload.status.segmentReport.details | Contains further reasoning about the shard spec chosen. |
| multiStageQuery.payload.status.errorReport | Error object. Only present if there was an error. |
| multiStageQuery.payload.status.errorReport.taskId | The task that reported the error, if known. May be a controller task or a worker task. |
| multiStageQuery.payload.status.errorReport.host | The hostname and port of the task that reported the error, if known. |
| multiStageQuery.payload.status.errorReport.stageNumber | The stage number that reported the error, if it happened during execution of a specific stage. |
| multiStageQuery.payload.status.errorReport.error | Error object. Contains errorCode at a minimum, and may contain other fields as described in the error code table. Always present if there is an error. |
| multiStageQuery.payload.status.errorReport.error.errorCode | One of the error codes from the error code table. Always present if there is an error. |
| multiStageQuery.payload.status.errorReport.error.errorMessage | User-friendly error message. Not always present, even if there is an error. |
| multiStageQuery.payload.status.errorReport.exceptionStackTrace | Java stack trace in string form, if the error was due to a server-side exception. |
| multiStageQuery.payload.stages | Array of query stages. |
| multiStageQuery.payload.stages[].stageNumber | Each stage has a number that differentiates it from other stages. |
| multiStageQuery.payload.stages[].phase | Either NEW, READING_INPUT, POST_READING, RESULTS_COMPLETE, or FAILED. Only present if the stage has started. |
| multiStageQuery.payload.stages[].workerCount | Number of parallel tasks that this stage is running on. Only present if the stage has started. |
| multiStageQuery.payload.stages[].partitionCount | Number of output partitions generated by this stage. Only present if the stage has started and has computed its number of output partitions. |
| multiStageQuery.payload.stages[].startTime | Start time of this stage. Only present if the stage has started. |
| multiStageQuery.payload.stages[].duration | The number of milliseconds that the stage has been running. Only present if the stage has started. |
| multiStageQuery.payload.stages[].sort | A boolean that is true if the stage does a sort as part of its execution. |
| multiStageQuery.payload.stages[].definition | The object defining what the stage does. |
| multiStageQuery.payload.stages[].definition.id | The unique identifier of the stage. |
| multiStageQuery.payload.stages[].definition.input | Array of inputs that the stage has. |
| multiStageQuery.payload.stages[].definition.broadcast | Array of input indexes that get broadcast. Only present if there are inputs that get broadcast. |
| multiStageQuery.payload.stages[].definition.processor | An object defining the processor logic. |
| multiStageQuery.payload.stages[].definition.signature | The output signature of the stage. |
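The per-worker timing fields above can be combined as the table notes: actualWorkTimeMS = durationMs - pendingMs. As an illustration (the helper name is our own), a client could compute actual work time for every finished worker attempt in a report:

```python
def worker_actual_work_ms(report):
    """Compute durationMs - pendingMs for each finished worker attempt.

    Returns a dict keyed by (workerNumber, retryIndex). Attempts with
    durationMs == -1 are still RUNNING and are skipped.
    """
    workers = report["multiStageQuery"]["payload"]["status"]["workers"]
    work_times = {}
    for worker_number, attempts in workers.items():
        for retry_index, attempt in enumerate(attempts):
            if attempt["durationMs"] >= 0:  # -1 denotes a RUNNING attempt
                work_times[(worker_number, retry_index)] = (
                    attempt["durationMs"] - attempt["pendingMs"]
                )
    return work_times
```

For the sample report above, worker 0's only attempt yields 15511 - 137 = 15374 milliseconds of actual work.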

Cancel a query task

Cancels a query task. Returns a JSON object with the ID of the task that was canceled successfully.

URL

POST /druid/indexer/v1/task/{taskId}/shutdown

Responses

  • 200 SUCCESS: Successfully shut down the task.
  • 404 NOT FOUND: Cannot find a task with the given ID, or the task is no longer running.


Sample request

The following example shows how to cancel a query task with the ID query-655efe33-781a-4c50-ae84-c2911b42d63c.

HTTP

```http
POST /druid/indexer/v1/task/query-655efe33-781a-4c50-ae84-c2911b42d63c/shutdown HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

cURL

```shell
curl --location --request POST 'http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-655efe33-781a-4c50-ae84-c2911b42d63c/shutdown'
```

Python

```python
import requests

url = "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-655efe33-781a-4c50-ae84-c2911b42d63c/shutdown"

response = requests.post(url)
print(response.text)
```

Sample response

The response shows the ID of the task that was canceled.

```json
{
  "task": "query-655efe33-781a-4c50-ae84-c2911b42d63c"
}
```