Tasks API

This document describes the API endpoints for task retrieval, submission, and deletion for Apache Druid. Tasks are individual jobs performed by Druid to complete operations such as ingestion, querying, and compaction.

In this topic, http://ROUTER_IP:ROUTER_PORT is a placeholder for your Router service address and port. For example, with the quickstart configuration, use http://localhost:8888.

Task information and retrieval

Get an array of tasks

Retrieves an array of all tasks in the Druid cluster. Each task object includes information on its ID, status, associated datasource, and other metadata. For definitions of the response properties, see the Tasks table.

URL

GET /druid/indexer/v1/tasks

Query parameters

The endpoint supports a set of optional query parameters to filter results.

Parameter | Type | Description
state | String | Filter list of tasks by task state. Valid options are running, complete, waiting, and pending.
datasource | String | Return tasks filtered by Druid datasource.
createdTimeInterval | String (ISO-8601) | Return tasks created within the specified interval. Use _ as the delimiter for the interval string. Do not use /. For example, 2023-06-27_2023-06-28.
max | Integer | Maximum number of complete tasks to return. Only applies when state is set to complete.
type | String | Filter tasks by task type. See task documentation for more details.

Responses

  • 200 SUCCESS: Successfully retrieved list of tasks
  • 400 BAD REQUEST: Invalid state query parameter value
  • 500 SERVER ERROR: Invalid query parameter


Sample request

The following example shows how to retrieve a list of tasks filtered with the following query parameters:

  • State: complete
  • Datasource: wikipedia_api
  • Time interval: between 2015-09-12 and 2015-09-13
  • Max entries returned: 10
  • Task type: query_worker

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/tasks/?state=complete&datasource=wikipedia_api&createdTimeInterval=2015-09-12_2015-09-13&max=10&type=query_worker"

HTTP:

GET /druid/indexer/v1/tasks/?state=complete&datasource=wikipedia_api&createdTimeInterval=2015-09-12_2015-09-13&max=10&type=query_worker HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

[
  {
    "id": "query-223549f8-b993-4483-b028-1b0d54713cad-worker0_0",
    "groupId": "query-223549f8-b993-4483-b028-1b0d54713cad",
    "type": "query_worker",
    "createdTime": "2023-06-22T22:11:37.012Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "SUCCESS",
    "status": "SUCCESS",
    "runnerStatusCode": "NONE",
    "duration": 17897,
    "location": {
      "host": "localhost",
      "port": 8101,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_api",
    "errorMsg": null
  },
  {
    "id": "query-fa82fa40-4c8c-4777-b832-cabbee5f519f-worker0_0",
    "groupId": "query-fa82fa40-4c8c-4777-b832-cabbee5f519f",
    "type": "query_worker",
    "createdTime": "2023-06-20T22:51:21.302Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "SUCCESS",
    "status": "SUCCESS",
    "runnerStatusCode": "NONE",
    "duration": 16911,
    "location": {
      "host": "localhost",
      "port": 8101,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_api",
    "errorMsg": null
  },
  {
    "id": "query-5419da7a-b270-492f-90e6-920ecfba766a-worker0_0",
    "groupId": "query-5419da7a-b270-492f-90e6-920ecfba766a",
    "type": "query_worker",
    "createdTime": "2023-06-20T22:45:53.909Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "SUCCESS",
    "status": "SUCCESS",
    "runnerStatusCode": "NONE",
    "duration": 17030,
    "location": {
      "host": "localhost",
      "port": 8101,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_api",
    "errorMsg": null
  }
]

Get an array of complete tasks

Retrieves an array of completed tasks in the Druid cluster. This is functionally equivalent to /druid/indexer/v1/tasks?state=complete. For definitions of the response properties, see the Tasks table.

URL

GET /druid/indexer/v1/completeTasks

Query parameters

The endpoint supports a set of optional query parameters to filter results.

Parameter | Type | Description
datasource | String | Return tasks filtered by Druid datasource.
createdTimeInterval | String (ISO-8601) | Return tasks created within the specified interval. The interval string should be delimited by _ instead of /. For example, 2023-06-27_2023-06-28.
max | Integer | Maximum number of complete tasks to return. Only applies when state is set to complete.
type | String | Filter tasks by task type. See task documentation for more details.

Responses

  • 200 SUCCESS: Successfully retrieved list of complete tasks
  • 404 NOT FOUND: Request sent to incorrect service


Sample request

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/completeTasks"

HTTP:

GET /druid/indexer/v1/completeTasks HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

[
  {
    "id": "query-223549f8-b993-4483-b028-1b0d54713cad-worker0_0",
    "groupId": "query-223549f8-b993-4483-b028-1b0d54713cad",
    "type": "query_worker",
    "createdTime": "2023-06-22T22:11:37.012Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "SUCCESS",
    "status": "SUCCESS",
    "runnerStatusCode": "NONE",
    "duration": 17897,
    "location": {
      "host": "localhost",
      "port": 8101,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_api",
    "errorMsg": null
  },
  {
    "id": "query-223549f8-b993-4483-b028-1b0d54713cad",
    "groupId": "query-223549f8-b993-4483-b028-1b0d54713cad",
    "type": "query_controller",
    "createdTime": "2023-06-22T22:11:28.367Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "SUCCESS",
    "status": "SUCCESS",
    "runnerStatusCode": "NONE",
    "duration": 30317,
    "location": {
      "host": "localhost",
      "port": 8100,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_api",
    "errorMsg": null
  }
]

Get an array of running tasks

Retrieves an array of running task objects in the Druid cluster. It is functionally equivalent to /druid/indexer/v1/tasks?state=running. For definitions of the response properties, see the Tasks table.

URL

GET /druid/indexer/v1/runningTasks

Query parameters

The endpoint supports a set of optional query parameters to filter results.

Parameter | Type | Description
datasource | String | Return tasks filtered by Druid datasource.
createdTimeInterval | String (ISO-8601) | Return tasks created within the specified interval. The interval string should be delimited by _ instead of /. For example, 2023-06-27_2023-06-28.
max | Integer | Maximum number of complete tasks to return. Only applies when state is set to complete.
type | String | Filter tasks by task type. See task documentation for more details.

Responses

  • 200 SUCCESS: Successfully retrieved list of running tasks


Sample request

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/runningTasks"

HTTP:

GET /druid/indexer/v1/runningTasks HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

[
  {
    "id": "query-32663269-ead9-405a-8eb6-0817a952ef47",
    "groupId": "query-32663269-ead9-405a-8eb6-0817a952ef47",
    "type": "query_controller",
    "createdTime": "2023-06-22T22:54:43.170Z",
    "queueInsertionTime": "2023-06-22T22:54:43.170Z",
    "statusCode": "RUNNING",
    "status": "RUNNING",
    "runnerStatusCode": "RUNNING",
    "duration": -1,
    "location": {
      "host": "localhost",
      "port": 8100,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_api",
    "errorMsg": null
  }
]

Get an array of waiting tasks

Retrieves an array of waiting tasks in the Druid cluster. It is functionally equivalent to /druid/indexer/v1/tasks?state=waiting. For definitions of the response properties, see the Tasks table.

URL

GET /druid/indexer/v1/waitingTasks

Query parameters

The endpoint supports a set of optional query parameters to filter results.

Parameter | Type | Description
datasource | String | Return tasks filtered by Druid datasource.
createdTimeInterval | String (ISO-8601) | Return tasks created within the specified interval. The interval string should be delimited by _ instead of /. For example, 2023-06-27_2023-06-28.
max | Integer | Maximum number of complete tasks to return. Only applies when state is set to complete.
type | String | Filter tasks by task type. See task documentation for more details.

Responses

  • 200 SUCCESS: Successfully retrieved list of waiting tasks


Sample request

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/waitingTasks"

HTTP:

GET /druid/indexer/v1/waitingTasks HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

[
  {
    "id": "index_parallel_wikipedia_auto_biahcbmf_2023-06-26T21:08:05.216Z",
    "groupId": "index_parallel_wikipedia_auto_biahcbmf_2023-06-26T21:08:05.216Z",
    "type": "index_parallel",
    "createdTime": "2023-06-26T21:08:05.217Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "RUNNING",
    "status": "RUNNING",
    "runnerStatusCode": "WAITING",
    "duration": -1,
    "location": {
      "host": null,
      "port": -1,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_auto",
    "errorMsg": null
  },
  {
    "id": "index_parallel_wikipedia_auto_afggfiec_2023-06-26T21:08:05.546Z",
    "groupId": "index_parallel_wikipedia_auto_afggfiec_2023-06-26T21:08:05.546Z",
    "type": "index_parallel",
    "createdTime": "2023-06-26T21:08:05.548Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "RUNNING",
    "status": "RUNNING",
    "runnerStatusCode": "WAITING",
    "duration": -1,
    "location": {
      "host": null,
      "port": -1,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_auto",
    "errorMsg": null
  },
  {
    "id": "index_parallel_wikipedia_auto_jmmddihf_2023-06-26T21:08:06.644Z",
    "groupId": "index_parallel_wikipedia_auto_jmmddihf_2023-06-26T21:08:06.644Z",
    "type": "index_parallel",
    "createdTime": "2023-06-26T21:08:06.671Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "RUNNING",
    "status": "RUNNING",
    "runnerStatusCode": "WAITING",
    "duration": -1,
    "location": {
      "host": null,
      "port": -1,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_auto",
    "errorMsg": null
  }
]

Get an array of pending tasks

Retrieves an array of pending tasks in the Druid cluster. It is functionally equivalent to /druid/indexer/v1/tasks?state=pending. For definitions of the response properties, see the Tasks table.

URL

GET /druid/indexer/v1/pendingTasks

Query parameters

The endpoint supports a set of optional query parameters to filter results.

Parameter | Type | Description
datasource | String | Return tasks filtered by Druid datasource.
createdTimeInterval | String (ISO-8601) | Return tasks created within the specified interval. The interval string should be delimited by _ instead of /. For example, 2023-06-27_2023-06-28.
max | Integer | Maximum number of complete tasks to return. Only applies when state is set to complete.
type | String | Filter tasks by task type. See task documentation for more details.

Responses

  • 200 SUCCESS: Successfully retrieved list of pending tasks


Sample request

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/pendingTasks"

HTTP:

GET /druid/indexer/v1/pendingTasks HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

[
  {
    "id": "query-7b37c315-50a0-4b68-aaa8-b1ef1f060e67",
    "groupId": "query-7b37c315-50a0-4b68-aaa8-b1ef1f060e67",
    "type": "query_controller",
    "createdTime": "2023-06-23T19:53:06.037Z",
    "queueInsertionTime": "2023-06-23T19:53:06.037Z",
    "statusCode": "RUNNING",
    "status": "RUNNING",
    "runnerStatusCode": "PENDING",
    "duration": -1,
    "location": {
      "host": null,
      "port": -1,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_api",
    "errorMsg": null
  },
  {
    "id": "query-544f0c41-f81d-4504-b98b-f9ab8b36ef36",
    "groupId": "query-544f0c41-f81d-4504-b98b-f9ab8b36ef36",
    "type": "query_controller",
    "createdTime": "2023-06-23T19:53:06.616Z",
    "queueInsertionTime": "2023-06-23T19:53:06.616Z",
    "statusCode": "RUNNING",
    "status": "RUNNING",
    "runnerStatusCode": "PENDING",
    "duration": -1,
    "location": {
      "host": null,
      "port": -1,
      "tlsPort": -1
    },
    "dataSource": "wikipedia_api",
    "errorMsg": null
  }
]

Get task payload

Retrieves the payload of a task given the task ID. It returns a JSON object with the task ID and payload, which includes the task's configuration details and the specifications used to execute it.

URL

GET /druid/indexer/v1/task/{taskId}

Responses

  • 200 SUCCESS: Successfully retrieved payload of task
  • 404 NOT FOUND: Cannot find task with ID


Sample request

The following example shows how to retrieve the payload of the task with the ID index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z.

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z"

HTTP:

GET /druid/indexer/v1/task/index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

{
  "task": "index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z",
  "payload": {
    "type": "index_parallel",
    "id": "index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z",
    "groupId": "index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z",
    "resource": {
      "availabilityGroup": "index_parallel_wikipedia_short_iajoonnd_2023-07-07T17:53:12.174Z",
      "requiredCapacity": 1
    },
    "spec": {
      "dataSchema": {
        "dataSource": "wikipedia_short",
        "timestampSpec": {
          "column": "time",
          "format": "iso",
          "missingValue": null
        },
        "dimensionsSpec": {
          "dimensions": [
            {
              "type": "string",
              "name": "cityName",
              "multiValueHandling": "SORTED_ARRAY",
              "createBitmapIndex": true
            },
            {
              "type": "string",
              "name": "countryName",
              "multiValueHandling": "SORTED_ARRAY",
              "createBitmapIndex": true
            },
            {
              "type": "string",
              "name": "regionName",
              "multiValueHandling": "SORTED_ARRAY",
              "createBitmapIndex": true
            }
          ],
          "dimensionExclusions": [
            "__time",
            "time"
          ],
          "includeAllDimensions": false,
          "useSchemaDiscovery": false
        },
        "metricsSpec": [],
        "granularitySpec": {
          "type": "uniform",
          "segmentGranularity": "DAY",
          "queryGranularity": {
            "type": "none"
          },
          "rollup": false,
          "intervals": [
            "2015-09-12T00:00:00.000Z/2015-09-13T00:00:00.000Z"
          ]
        },
        "transformSpec": {
          "filter": null,
          "transforms": []
        }
      },
      "ioConfig": {
        "type": "index_parallel",
        "inputSource": {
          "type": "local",
          "baseDir": "quickstart/tutorial",
          "filter": "wikiticker-2015-09-12-sampled.json.gz"
        },
        "inputFormat": {
          "type": "json",
          "keepNullColumns": false,
          "assumeNewlineDelimited": false,
          "useJsonNodeReader": false
        },
        "appendToExisting": false,
        "dropExisting": false
      },
      "tuningConfig": {
        "type": "index_parallel",
        "maxRowsPerSegment": 5000000,
        "appendableIndexSpec": {
          "type": "onheap",
          "preserveExistingMetrics": false
        },
        "maxRowsInMemory": 25000,
        "maxBytesInMemory": 0,
        "skipBytesInMemoryOverheadCheck": false,
        "maxTotalRows": null,
        "numShards": null,
        "splitHintSpec": null,
        "partitionsSpec": {
          "type": "dynamic",
          "maxRowsPerSegment": 5000000,
          "maxTotalRows": null
        },
        "indexSpec": {
          "bitmap": {
            "type": "roaring"
          },
          "dimensionCompression": "lz4",
          "stringDictionaryEncoding": {
            "type": "utf8"
          },
          "metricCompression": "lz4",
          "longEncoding": "longs"
        },
        "indexSpecForIntermediatePersists": {
          "bitmap": {
            "type": "roaring"
          },
          "dimensionCompression": "lz4",
          "stringDictionaryEncoding": {
            "type": "utf8"
          },
          "metricCompression": "lz4",
          "longEncoding": "longs"
        },
        "maxPendingPersists": 0,
        "forceGuaranteedRollup": false,
        "reportParseExceptions": false,
        "pushTimeout": 0,
        "segmentWriteOutMediumFactory": null,
        "maxNumConcurrentSubTasks": 1,
        "maxRetry": 3,
        "taskStatusCheckPeriodMs": 1000,
        "chatHandlerTimeout": "PT10S",
        "chatHandlerNumRetries": 5,
        "maxNumSegmentsToMerge": 100,
        "totalNumMergeTasks": 10,
        "logParseExceptions": false,
        "maxParseExceptions": 2147483647,
        "maxSavedParseExceptions": 0,
        "maxColumnsToMerge": -1,
        "awaitSegmentAvailabilityTimeoutMillis": 0,
        "maxAllowedLockCount": -1,
        "partitionDimensions": []
      }
    },
    "context": {
      "forceTimeChunkLock": true,
      "useLineageBasedSegmentAllocation": true
    },
    "dataSource": "wikipedia_short"
  }
}

Get task status

Retrieves the status of a task given the task ID. It returns a JSON object with the task’s status code, runner status, task type, datasource, and other relevant metadata.

URL

GET /druid/indexer/v1/task/{taskId}/status

Responses

  • 200 SUCCESS: Successfully retrieved task status
  • 404 NOT FOUND: Cannot find task with ID


Sample request

The following example shows how to retrieve the status of the task with the ID query-223549f8-b993-4483-b028-1b0d54713cad.

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-223549f8-b993-4483-b028-1b0d54713cad/status"

HTTP:

GET /druid/indexer/v1/task/query-223549f8-b993-4483-b028-1b0d54713cad/status HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

{
  "task": "query-223549f8-b993-4483-b028-1b0d54713cad",
  "status": {
    "id": "query-223549f8-b993-4483-b028-1b0d54713cad",
    "groupId": "query-223549f8-b993-4483-b028-1b0d54713cad",
    "type": "query_controller",
    "createdTime": "2023-06-22T22:11:28.367Z",
    "queueInsertionTime": "1970-01-01T00:00:00.000Z",
    "statusCode": "RUNNING",
    "status": "RUNNING",
    "runnerStatusCode": "RUNNING",
    "duration": -1,
    "location": {"host": "localhost", "port": 8100, "tlsPort": -1},
    "dataSource": "wikipedia_api",
    "errorMsg": null
  }
}

Get task segments

Info: This API is deprecated and will be removed in future releases.

Retrieves information about segments generated by the task given the task ID. To use this endpoint, enable the audit log on the Overlord by setting druid.indexer.auditLog.enabled=true.

In addition to enabling audit logs, configure a cleanup strategy to prevent overloading the metadata store with old audit logs, which may cause performance issues. To enable automated cleanup of audit logs on the Coordinator, set druid.coordinator.kill.audit.on to true. You can also manually export the audit logs to external storage. For more information, see Audit records.
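A minimal sketch of the two settings named above as they might appear in the service configuration files (the file locations are an assumption based on a standard deployment):

# Overlord runtime.properties: record task actions in the audit log
druid.indexer.auditLog.enabled=true

# Coordinator runtime.properties: automatically clean up old audit records
druid.coordinator.kill.audit.on=true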

URL

GET /druid/indexer/v1/task/{taskId}/segments

Responses

  • 200 SUCCESS: Successfully retrieved task segments


Sample request

The following example shows how to retrieve the segments of the task with the ID query-52a8aafe-7265-4427-89fe-dc51275cc470.

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-52a8aafe-7265-4427-89fe-dc51275cc470/segments"

HTTP:

GET /druid/indexer/v1/task/query-52a8aafe-7265-4427-89fe-dc51275cc470/segments HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

A successful request returns a 200 OK response and an array of the task segments.

Get task log

Retrieves the event log associated with a task. It returns a list of events logged during the lifecycle of the task. This endpoint is useful for understanding how a task executed, including any errors or warnings it raised.

Task logs are automatically retrieved from the Middle Manager/Indexer or from long-term storage. For reference, see Task logs.

URL

GET /druid/indexer/v1/task/{taskId}/log

Query parameters

  • offset (optional)
    • Type: Int
    • Excludes the specified number of entries from the beginning of the response.

Responses

  • 200 SUCCESS: Successfully retrieved task log


Sample request

The following example shows how to retrieve the log of the task with the ID index_kafka_social_media_0e905aa31037879_nommnaeg.

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/index_kafka_social_media_0e905aa31037879_nommnaeg/log"

HTTP:

GET /druid/indexer/v1/task/index_kafka_social_media_0e905aa31037879_nommnaeg/log HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
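To skip entries at the beginning of the log, pass the offset query parameter. The following sketch assumes you want to exclude the first 20 entries of the same task's log:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/index_kafka_social_media_0e905aa31037879_nommnaeg/log?offset=20"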

Sample response

2023-07-03T22:11:17,891 INFO [qtp1251996697-122] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Sequence[index_kafka_social_media_0e905aa31037879_0] end offsets updated from [{0=9223372036854775807}] to [{0=230985}].
2023-07-03T22:11:17,900 INFO [qtp1251996697-122] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Saved sequence metadata to disk: [SequenceMetadata{sequenceId=0, sequenceName='index_kafka_social_media_0e905aa31037879_0', assignments=[0], startOffsets={0=230985}, exclusiveStartPartitions=[], endOffsets={0=230985}, sentinel=false, checkpointed=true}]
2023-07-03T22:11:17,901 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Received resume command, resuming ingestion.
2023-07-03T22:11:17,901 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Finished reading partition[0], up to[230985].
2023-07-03T22:11:17,902 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Resetting generation and member id due to: consumer pro-actively leaving the group
2023-07-03T22:11:17,902 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Request joining group due to: consumer pro-actively leaving the group
2023-07-03T22:11:17,902 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Unsubscribed all topics or patterns and assigned partitions
2023-07-03T22:11:17,912 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted rows[0] and (estimated) bytes[0]
2023-07-03T22:11:17,916 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Flushed in-memory data with commit metadata [AppenderatorDriverMetadata{segments={}, lastSegmentIds={}, callerMetadata={nextPartitions=SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}}}] for segments:
2023-07-03T22:11:17,917 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted stats: processed rows: [0], persisted rows[0], sinks: [0], total fireHydrants (across sinks): [0], persisted fireHydrants (across sinks): [0]
2023-07-03T22:11:17,919 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - Pushing [0] segments in background
2023-07-03T22:11:17,921 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted rows[0] and (estimated) bytes[0]
2023-07-03T22:11:17,924 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Flushed in-memory data with commit metadata [AppenderatorDriverMetadata{segments={}, lastSegmentIds={}, callerMetadata={nextPartitions=SeekableStreamStartSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}, exclusivePartitions=[]}, publishPartitions=SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}}}] for segments:
2023-07-03T22:11:17,924 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted stats: processed rows: [0], persisted rows[0], sinks: [0], total fireHydrants (across sinks): [0], persisted fireHydrants (across sinks): [0]
2023-07-03T22:11:17,925 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-merge] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Preparing to push (stats): processed rows: [0], sinks: [0], fireHydrants (across sinks): [0]
2023-07-03T22:11:17,925 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-appenderator-merge] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Push complete...
2023-07-03T22:11:17,929 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-publish] org.apache.druid.indexing.seekablestream.SequenceMetadata - With empty segment set, start offsets [SeekableStreamStartSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}, exclusivePartitions=[]}] and end offsets [SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}] are the same, skipping metadata commit.
2023-07-03T22:11:17,930 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-publish] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - Published [0] segments with commit metadata [{nextPartitions=SeekableStreamStartSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}, exclusivePartitions=[]}, publishPartitions=SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}}]
2023-07-03T22:11:17,930 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-publish] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Published 0 segments for sequence [index_kafka_social_media_0e905aa31037879_0] with metadata [AppenderatorDriverMetadata{segments={}, lastSegmentIds={}, callerMetadata={nextPartitions=SeekableStreamStartSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}, exclusivePartitions=[]}, publishPartitions=SeekableStreamEndSequenceNumbers{stream='social_media', partitionSequenceNumberMap={0=230985}}}}].
2023-07-03T22:11:17,931 INFO [[index_kafka_social_media_0e905aa31037879_nommnaeg]-publish] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Saved sequence metadata to disk: []
2023-07-03T22:11:17,932 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Handoff complete for segments:
2023-07-03T22:11:17,932 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Resetting generation and member id due to: consumer pro-actively leaving the group
2023-07-03T22:11:17,932 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-kafka-supervisor-dcanhmig-1, groupId=kafka-supervisor-dcanhmig] Request joining group due to: consumer pro-actively leaving the group
2023-07-03T22:11:17,933 INFO [task-runner-0-priority-0] org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
2023-07-03T22:11:17,933 INFO [task-runner-0-priority-0] org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-07-03T22:11:17,933 INFO [task-runner-0-priority-0] org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
2023-07-03T22:11:17,935 INFO [task-runner-0-priority-0] org.apache.kafka.common.utils.AppInfoParser - App info kafka.consumer for consumer-kafka-supervisor-dcanhmig-1 unregistered
2023-07-03T22:11:17,936 INFO [task-runner-0-priority-0] org.apache.druid.curator.announcement.Announcer - Unannouncing [/druid/internal-discovery/PEON/localhost:8100]
2023-07-03T22:11:17,972 INFO [task-runner-0-priority-0] org.apache.druid.curator.discovery.CuratorDruidNodeAnnouncer - Unannounced self [{"druidNode":{"service":"druid/middleManager","host":"localhost","bindOnHost":false,"plaintextPort":8100,"port":-1,"tlsPort":-1,"enablePlaintextPort":true,"enableTlsPort":false},"nodeType":"peon","services":{"dataNodeService":{"type":"dataNodeService","tier":"_default_tier","maxSize":0,"type":"indexer-executor","serverType":"indexer-executor","priority":0},"lookupNodeService":{"type":"lookupNodeService","lookupTier":"__default"}}}].
2023-07-03T22:11:17,972 INFO [task-runner-0-priority-0] org.apache.druid.curator.announcement.Announcer - Unannouncing [/druid/announcements/localhost:8100]
2023-07-03T22:11:17,996 INFO [task-runner-0-priority-0] org.apache.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_kafka_social_media_0e905aa31037879_nommnaeg",
  "status" : "SUCCESS",
  "duration" : 3601130,
  "errorMsg" : null,
  "location" : {
    "host" : null,
    "port" : -1,
    "tlsPort" : -1
  }
}
2023-07-03T22:11:17,998 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Stopping lifecycle [module] stage [ANNOUNCEMENTS]
2023-07-03T22:11:18,005 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Stopping lifecycle [module] stage [SERVER]
2023-07-03T22:11:18,009 INFO [main] org.eclipse.jetty.server.AbstractConnector - Stopped ServerConnector@6491006{HTTP/1.1, (http/1.1)}{0.0.0.0:8100}
2023-07-03T22:11:18,009 INFO [main] org.eclipse.jetty.server.session - node0 Stopped scavenging
2023-07-03T22:11:18,012 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@742aa00a{/,null,STOPPED}
2023-07-03T22:11:18,014 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Stopping lifecycle [module] stage [NORMAL]
2023-07-03T22:11:18,014 INFO [main] org.apache.druid.server.coordination.ZkCoordinator - Stopping ZkCoordinator for [DruidServerMetadata{name='localhost:8100', hostAndPort='localhost:8100', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}]
2023-07-03T22:11:18,014 INFO [main] org.apache.druid.server.coordination.SegmentLoadDropHandler - Stopping...
2023-07-03T22:11:18,014 INFO [main] org.apache.druid.server.coordination.SegmentLoadDropHandler - Stopped.
2023-07-03T22:11:18,014 INFO [main] org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner - Starting graceful shutdown of task[index_kafka_social_media_0e905aa31037879_nommnaeg].
2023-07-03T22:11:18,014 INFO [main] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Stopping forcefully (status: [PUBLISHING])
2023-07-03T22:11:18,019 INFO [LookupExtractorFactoryContainerProvider-MainThread] org.apache.druid.query.lookup.LookupReferencesManager - Lookup Management loop exited. Lookup notices are not handled anymore.
2023-07-03T22:11:18,020 INFO [main] org.apache.druid.query.lookup.LookupReferencesManager - Closed lookup [name].
2023-07-03T22:11:18,020 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting
2023-07-03T22:11:18,147 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x1000097ceaf0007 closed
2023-07-03T22:11:18,147 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000097ceaf0007
2023-07-03T22:11:18,151 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Stopping lifecycle [module] stage [INIT]
Finished peon task

Get task completion report

Retrieves a task completion report for a task. It returns a JSON object with information about the number of rows ingested and any parse exceptions that Druid raised.

URL

GET /druid/indexer/v1/task/{taskId}/reports

Responses

  • 200 SUCCESS: Successfully retrieved task report


Sample request

The following example shows how to retrieve the completion report of the task with the ID query-52a8aafe-7265-4427-89fe-dc51275cc470.

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-52a8aafe-7265-4427-89fe-dc51275cc470/reports"

HTTP:

GET /druid/indexer/v1/task/query-52a8aafe-7265-4427-89fe-dc51275cc470/reports HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

{
  "ingestionStatsAndErrors": {
    "type": "ingestionStatsAndErrors",
    "taskId": "query-52a8aafe-7265-4427-89fe-dc51275cc470",
    "payload": {
      "ingestionState": "COMPLETED",
      "unparseableEvents": {},
      "rowStats": {
        "determinePartitions": {
          "processed": 0,
          "processedBytes": 0,
          "processedWithError": 0,
          "thrownAway": 0,
          "unparseable": 0
        },
        "buildSegments": {
          "processed": 39244,
          "processedBytes": 17106256,
          "processedWithError": 0,
          "thrownAway": 0,
          "unparseable": 0
        }
      },
      "errorMsg": null,
      "segmentAvailabilityConfirmed": false,
      "segmentAvailabilityWaitTimeMs": 0
    }
  }
}
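To pull only the row statistics out of a report, you can filter the response with a JSON processor. A minimal sketch using jq, which is an assumption about your tooling rather than part of the API:

curl -s "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-52a8aafe-7265-4427-89fe-dc51275cc470/reports" \
  | jq '.ingestionStatsAndErrors.payload.rowStats'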

Task operations

Submit a task

Submits a JSON-based ingestion spec or supervisor spec to the Overlord. It returns the task ID of the submitted task. For information on creating an ingestion spec, refer to the ingestion spec reference.

Note that for most batch ingestion use cases, you should use the SQL-ingestion API instead of JSON-based batch ingestion.

URL

POST /druid/indexer/v1/task

Responses

  • 200 SUCCESS: Successfully submitted task
  • 400 BAD REQUEST: Missing information in query
  • 415 UNSUPPORTED MEDIA TYPE: Incorrect request body media type
  • 500 SERVER ERROR: Unexpected token or characters in request body


Sample request

The following request is an example of submitting a task to create a datasource named wikipedia_auto.

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task" \
--header 'Content-Type: application/json' \
--data '{
  "type" : "index_parallel",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "wikipedia_auto",
      "timestampSpec": {
        "column": "time",
        "format": "iso"
      },
      "dimensionsSpec" : {
        "useSchemaDiscovery": true
      },
      "metricsSpec" : [],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "none",
        "intervals" : ["2015-09-12/2015-09-13"],
        "rollup" : false
      }
    },
    "ioConfig" : {
      "type" : "index_parallel",
      "inputSource" : {
        "type" : "local",
        "baseDir" : "quickstart/tutorial/",
        "filter" : "wikiticker-2015-09-12-sampled.json.gz"
      },
      "inputFormat" : {
        "type" : "json"
      },
      "appendToExisting" : false
    },
    "tuningConfig" : {
      "type" : "index_parallel",
      "maxRowsPerSegment" : 5000000,
      "maxRowsInMemory" : 25000
    }
  }
}'

HTTP:

POST /druid/indexer/v1/task HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
Content-Type: application/json
Content-Length: 952

{
  "type" : "index_parallel",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "wikipedia_auto",
      "timestampSpec": {
        "column": "time",
        "format": "iso"
      },
      "dimensionsSpec" : {
        "useSchemaDiscovery": true
      },
      "metricsSpec" : [],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "none",
        "intervals" : ["2015-09-12/2015-09-13"],
        "rollup" : false
      }
    },
    "ioConfig" : {
      "type" : "index_parallel",
      "inputSource" : {
        "type" : "local",
        "baseDir" : "quickstart/tutorial/",
        "filter" : "wikiticker-2015-09-12-sampled.json.gz"
      },
      "inputFormat" : {
        "type" : "json"
      },
      "appendToExisting" : false
    },
    "tuningConfig" : {
      "type" : "index_parallel",
      "maxRowsPerSegment" : 5000000,
      "maxRowsInMemory" : 25000
    }
  }
}

Sample response

{
  "task": "index_parallel_wikipedia_odofhkle_2023-06-23T21:07:28.226Z"
}
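A common follow-up is to poll the returned task ID until the task leaves the RUNNING state. The following shell sketch assumes jq is installed and reuses the task ID from the sample response above:

TASK_ID="index_parallel_wikipedia_odofhkle_2023-06-23T21:07:28.226Z"
while true; do
  # Read the status field from the task status endpoint
  STATUS=$(curl -s "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/$TASK_ID/status" \
    | jq -r '.status.status')
  echo "Task status: $STATUS"
  [ "$STATUS" != "RUNNING" ] && break
  sleep 5
done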

Shut down a task

Shuts down a task if it is not already complete. It returns a JSON object with the ID of the task that was shut down successfully.

URL

POST /druid/indexer/v1/task/{taskId}/shutdown

Responses

  • 200 SUCCESS: Successfully shut down task
  • 404 NOT FOUND: Cannot find task with ID or task is no longer running


Sample request

The following request shows how to shut down a task with the ID query-52as8aafe-7265-4427-89fe-dc51275cc470.

cURL:

curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/task/query-52as8aafe-7265-4427-89fe-dc51275cc470/shutdown"

HTTP:

POST /druid/indexer/v1/task/query-52as8aafe-7265-4427-89fe-dc51275cc470/shutdown HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

{
  "task": "query-577a83dd-a14e-4380-bd01-c942b781236b"
}

Shut down all tasks for a datasource

Shuts down all tasks for a specified datasource. If successful, it returns a JSON object with the name of the datasource whose tasks are shut down.

URL

POST /druid/indexer/v1/datasources/{datasource}/shutdownAllTasks

Responses

  • 200 SUCCESS: Successfully shut down tasks
  • 404 NOT FOUND: Error or datasource does not have a running task


Sample request

The following request is an example of shutting down all tasks for datasource wikipedia_auto.

cURL:

curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/datasources/wikipedia_auto/shutdownAllTasks"

HTTP:

POST /druid/indexer/v1/datasources/wikipedia_auto/shutdownAllTasks HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

{
  "dataSource": "wikipedia_auto"
}

Task management

Retrieve status objects for tasks

Retrieves a list of task status objects for the task IDs provided in the request body. It returns a set of JSON objects with the status, duration, location, and any error messages for each task.

URL

POST /druid/indexer/v1/taskStatus

Responses

  • 200 SUCCESS: Successfully retrieved status objects
  • 415 UNSUPPORTED MEDIA TYPE: Missing request body or incorrect request body type


Sample request

The following request is an example of retrieving status objects for the task IDs index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z and index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z.

cURL:

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/taskStatus" \
--header 'Content-Type: application/json' \
--data '["index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z","index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z"]'

HTTP:

POST /druid/indexer/v1/taskStatus HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
Content-Type: application/json
Content-Length: 134

["index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z", "index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z"]

Sample response

{
  "index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z": {
    "id": "index_parallel_wikipedia_auto_jbgiianh_2023-06-26T23:17:56.769Z",
    "status": "SUCCESS",
    "duration": 10630,
    "errorMsg": null,
    "location": {
      "host": "localhost",
      "port": 8100,
      "tlsPort": -1
    }
  },
  "index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z": {
    "id": "index_parallel_wikipedia_auto_jndhkpbo_2023-06-26T17:23:05.308Z",
    "status": "SUCCESS",
    "duration": 11012,
    "errorMsg": null,
    "location": {
      "host": "localhost",
      "port": 8100,
      "tlsPort": -1
    }
  }
}

Clean up pending segments for a datasource

Manually cleans up the pending segments table in metadata storage for a datasource. It returns a JSON object with the property numDeleted, which indicates the number of rows deleted from the pending segments table. When the druid.coordinator.kill.pendingSegments.on Coordinator setting is enabled, the Coordinator performs this operation periodically.
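If you prefer automated cleanup over calling this endpoint manually, you can enable the Coordinator setting named above. A one-line sketch for the Coordinator runtime.properties:

# Coordinator runtime.properties: periodically clean up the pending segments table
druid.coordinator.kill.pendingSegments.on=true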

URL

DELETE /druid/indexer/v1/pendingSegments/{datasource}

Responses

  • 200 SUCCESS: Successfully deleted pending segments


Sample request

The following request is an example of cleaning up pending segments for the wikipedia_api datasource.

cURL:

curl --request DELETE "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/pendingSegments/wikipedia_api"

HTTP:

DELETE /druid/indexer/v1/pendingSegments/wikipedia_api HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

{
  "numDeleted": 2
}