Automatic compaction API

This topic describes the status and configuration API endpoints for automatic compaction using Coordinator duties in Apache Druid. You can configure automatic compaction in the Druid web console or API.

Experimental

Instead of the automatic compaction API, you can use the supervisor API to submit auto-compaction jobs using compaction supervisors. For more information, see Auto-compaction using compaction supervisors.

In this topic, http://ROUTER_IP:ROUTER_PORT is a placeholder for your Router service address and port. Replace it with the information for your deployment. For example, use http://localhost:8888 for quickstart deployments.

Manage automatic compaction

Create or update automatic compaction configuration

Creates or updates the automatic compaction configuration for a datasource. Pass the automatic compaction configuration as a JSON object in the request body.

The automatic compaction configuration requires only the dataSource property. Druid fills all other properties with default values if not specified. See Automatic compaction dynamic configuration for configuration details.

Note that this endpoint returns an HTTP 200 OK message code even if the datasource name does not exist.
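
Because only `dataSource` is required, the request body can be assembled programmatically. The sketch below is not an official Druid client; `build_compaction_config` is a hypothetical helper that merges optional overrides onto the one required field:

```python
import json

def build_compaction_config(data_source, **overrides):
    """Build a request body for POST /druid/coordinator/v1/config/compaction.

    Only dataSource is required; Druid supplies defaults for omitted fields.
    """
    if not data_source:
        raise ValueError("dataSource is required")
    config = {"dataSource": data_source}
    config.update(overrides)  # e.g. skipOffsetFromLatest, granularitySpec
    return config

body = build_compaction_config(
    "wikipedia_hour",
    skipOffsetFromLatest="PT0S",
    granularitySpec={"segmentGranularity": "DAY"},
)
payload = json.dumps(body)  # send with Content-Type: application/json
```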

URL

POST /druid/coordinator/v1/config/compaction

Responses

  • 200 SUCCESS: Successfully submitted auto compaction configuration

Sample request

The following example creates an automatic compaction configuration for the datasource wikipedia_hour, which was ingested with HOUR segment granularity. This automatic compaction configuration performs compaction on wikipedia_hour, resulting in compacted segments that represent a day interval of data.

In this example:

  • wikipedia_hour is a datasource with HOUR segment granularity.
  • skipOffsetFromLatest is set to PT0S, meaning that no data is skipped.
  • partitionsSpec is set to the default dynamic, allowing Druid to dynamically determine the optimal partitioning strategy.
  • type is set to index_parallel, meaning that parallel indexing is used.
  • segmentGranularity is set to DAY, meaning that each compacted segment is a day of data.

cURL

```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction" \
--header 'Content-Type: application/json' \
--data '{
  "dataSource": "wikipedia_hour",
  "skipOffsetFromLatest": "PT0S",
  "tuningConfig": {
    "partitionsSpec": {
      "type": "dynamic"
    },
    "type": "index_parallel"
  },
  "granularitySpec": {
    "segmentGranularity": "DAY"
  }
}'
```

HTTP

```http
POST /druid/coordinator/v1/config/compaction HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
Content-Type: application/json
Content-Length: 281

{
  "dataSource": "wikipedia_hour",
  "skipOffsetFromLatest": "PT0S",
  "tuningConfig": {
    "partitionsSpec": {
      "type": "dynamic"
    },
    "type": "index_parallel"
  },
  "granularitySpec": {
    "segmentGranularity": "DAY"
  }
}
```

Sample response

A successful request returns an HTTP 200 OK message code and an empty response body.

Remove automatic compaction configuration

Removes the automatic compaction configuration for a datasource. This updates the compaction status of the datasource to “Not enabled.”

URL

DELETE /druid/coordinator/v1/config/compaction/{dataSource}

Responses

  • 200 SUCCESS: Successfully deleted automatic compaction configuration
  • 404 NOT FOUND: Datasource does not have automatic compaction or invalid datasource name

Sample request

cURL

```shell
curl --request DELETE "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour"
```

HTTP

```http
DELETE /druid/coordinator/v1/config/compaction/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

Sample response

A successful request returns an HTTP 200 OK message code and an empty response body.

Update capacity for compaction tasks

Updates the capacity for compaction tasks. The minimum number of compaction tasks is 1 and the maximum is 2147483647.

Note that while the maximum number of compaction tasks can theoretically be set to 2147483647, the practical limit is determined by the available cluster capacity. By default, compaction is also capped at 10% of the cluster's total task slots (a ratio of 0.1).

URL

POST /druid/coordinator/v1/config/compaction/taskslots

Query parameters

To limit the maximum number of compaction tasks, use the optional query parameters ratio and max:

  • ratio (optional)
    • Type: Float
    • Default: 0.1
    • Limits the ratio of the total task slots to compaction task slots.
  • max (optional)
    • Type: Int
    • Default: 2147483647
    • Limits the maximum number of task slots for compaction tasks.
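
The two parameters combine as a pair of caps: the slot count derived from the ratio is additionally bounded by max. The helpers below are hypothetical (not a Druid API) and assume the ratio applies to the cluster's total task slots, as described above:

```python
from urllib.parse import urlencode

def effective_compaction_slots(total_task_slots, ratio=0.1, maximum=2147483647):
    # Slots derived from the ratio, additionally capped by the hard maximum.
    return min(int(total_task_slots * ratio), maximum)

def taskslots_url(router, ratio, maximum):
    # Query string for POST /druid/coordinator/v1/config/compaction/taskslots
    return (f"{router}/druid/coordinator/v1/config/compaction/taskslots?"
            + urlencode({"ratio": ratio, "max": maximum}))

# With 100 total task slots, a ratio of 0.2 wins over a very large max.
slots = effective_compaction_slots(100, ratio=0.2, maximum=250000)
url = taskslots_url("http://localhost:8888", 0.2, 250000)
```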

Responses

  • 200 SUCCESS: Successfully updated compaction configuration
  • 404 NOT FOUND: Invalid max value

Sample request

cURL

```shell
curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/taskslots?ratio=0.2&max=250000"
```

HTTP

```http
POST /druid/coordinator/v1/config/compaction/taskslots?ratio=0.2&max=250000 HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

Sample response

A successful request returns an HTTP 200 OK message code and an empty response body.

View automatic compaction configuration

Get all automatic compaction configurations

Retrieves all automatic compaction configurations. Returns a compactionConfigs object containing the active automatic compaction configurations of all datasources.

You can use this endpoint to retrieve compactionTaskSlotRatio and maxCompactionTaskSlots values for managing resource allocation of compaction tasks.

URL

GET /druid/coordinator/v1/config/compaction

Responses

  • 200 SUCCESS: Successfully retrieved automatic compaction configurations

Sample request

cURL

```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction"
```

HTTP

```http
GET /druid/coordinator/v1/config/compaction HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

Sample response

```json
{
  "compactionConfigs": [
    {
      "dataSource": "wikipedia_hour",
      "taskPriority": 25,
      "inputSegmentSizeBytes": 100000000000000,
      "maxRowsPerSegment": null,
      "skipOffsetFromLatest": "PT0S",
      "tuningConfig": {
        "maxRowsInMemory": null,
        "appendableIndexSpec": null,
        "maxBytesInMemory": null,
        "maxTotalRows": null,
        "splitHintSpec": null,
        "partitionsSpec": {
          "type": "dynamic",
          "maxRowsPerSegment": 5000000,
          "maxTotalRows": null
        },
        "indexSpec": null,
        "indexSpecForIntermediatePersists": null,
        "maxPendingPersists": null,
        "pushTimeout": null,
        "segmentWriteOutMediumFactory": null,
        "maxNumConcurrentSubTasks": null,
        "maxRetry": null,
        "taskStatusCheckPeriodMs": null,
        "chatHandlerTimeout": null,
        "chatHandlerNumRetries": null,
        "maxNumSegmentsToMerge": null,
        "totalNumMergeTasks": null,
        "maxColumnsToMerge": null,
        "type": "index_parallel",
        "forceGuaranteedRollup": false
      },
      "granularitySpec": {
        "segmentGranularity": "DAY",
        "queryGranularity": null,
        "rollup": null
      },
      "dimensionsSpec": null,
      "metricsSpec": null,
      "transformSpec": null,
      "ioConfig": null,
      "taskContext": null
    },
    {
      "dataSource": "wikipedia",
      "taskPriority": 25,
      "inputSegmentSizeBytes": 100000000000000,
      "maxRowsPerSegment": null,
      "skipOffsetFromLatest": "PT0S",
      "tuningConfig": {
        "maxRowsInMemory": null,
        "appendableIndexSpec": null,
        "maxBytesInMemory": null,
        "maxTotalRows": null,
        "splitHintSpec": null,
        "partitionsSpec": {
          "type": "dynamic",
          "maxRowsPerSegment": 5000000,
          "maxTotalRows": null
        },
        "indexSpec": null,
        "indexSpecForIntermediatePersists": null,
        "maxPendingPersists": null,
        "pushTimeout": null,
        "segmentWriteOutMediumFactory": null,
        "maxNumConcurrentSubTasks": null,
        "maxRetry": null,
        "taskStatusCheckPeriodMs": null,
        "chatHandlerTimeout": null,
        "chatHandlerNumRetries": null,
        "maxNumSegmentsToMerge": null,
        "totalNumMergeTasks": null,
        "maxColumnsToMerge": null,
        "type": "index_parallel",
        "forceGuaranteedRollup": false
      },
      "granularitySpec": {
        "segmentGranularity": "DAY",
        "queryGranularity": null,
        "rollup": null
      },
      "dimensionsSpec": null,
      "metricsSpec": null,
      "transformSpec": null,
      "ioConfig": null,
      "taskContext": null
    }
  ],
  "compactionTaskSlotRatio": 0.1,
  "maxCompactionTaskSlots": 2147483647,
  "useAutoScaleSlots": false
}
```

Get automatic compaction configuration

Retrieves the automatic compaction configuration for a datasource.

URL

GET /druid/coordinator/v1/config/compaction/{dataSource}

Responses

  • 200 SUCCESS: Successfully retrieved configuration for datasource
  • 404 NOT FOUND: Invalid datasource or datasource does not have automatic compaction enabled

Sample request

The following example retrieves the automatic compaction configuration for datasource wikipedia_hour.

cURL

```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour"
```

HTTP

```http
GET /druid/coordinator/v1/config/compaction/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

Sample response

```json
{
  "dataSource": "wikipedia_hour",
  "taskPriority": 25,
  "inputSegmentSizeBytes": 100000000000000,
  "maxRowsPerSegment": null,
  "skipOffsetFromLatest": "PT0S",
  "tuningConfig": {
    "maxRowsInMemory": null,
    "appendableIndexSpec": null,
    "maxBytesInMemory": null,
    "maxTotalRows": null,
    "splitHintSpec": null,
    "partitionsSpec": {
      "type": "dynamic",
      "maxRowsPerSegment": 5000000,
      "maxTotalRows": null
    },
    "indexSpec": null,
    "indexSpecForIntermediatePersists": null,
    "maxPendingPersists": null,
    "pushTimeout": null,
    "segmentWriteOutMediumFactory": null,
    "maxNumConcurrentSubTasks": null,
    "maxRetry": null,
    "taskStatusCheckPeriodMs": null,
    "chatHandlerTimeout": null,
    "chatHandlerNumRetries": null,
    "maxNumSegmentsToMerge": null,
    "totalNumMergeTasks": null,
    "maxColumnsToMerge": null,
    "type": "index_parallel",
    "forceGuaranteedRollup": false
  },
  "granularitySpec": {
    "segmentGranularity": "DAY",
    "queryGranularity": null,
    "rollup": null
  },
  "dimensionsSpec": null,
  "metricsSpec": null,
  "transformSpec": null,
  "ioConfig": null,
  "taskContext": null
}
```

Get automatic compaction configuration history

Retrieves the history of the automatic compaction configuration for a datasource. Returns an empty list if the datasource does not exist or there is no compaction history for the datasource.

The response contains a list of objects with the following keys:

  • globalConfig: A JSON object containing automatic compaction configuration that applies to the entire cluster.
  • compactionConfig: A JSON object containing the automatic compaction configuration for the datasource.
  • auditInfo: A JSON object containing information about who made the change, including the author, comment, and ip.
  • auditTime: The date and time when the change was made.
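
Since auditTime is an ISO-8601 UTC timestamp, entries sort chronologically as plain strings, which makes it easy to pick out the most recent change. This sketch operates on an already-fetched response; the values are illustrative:

```python
# Two entries mimicking the shape of the history response.
history = [
    {"auditTime": "2023-07-31T18:15:19.302Z",
     "compactionConfig": {"dataSource": "wikipedia_hour",
                          "skipOffsetFromLatest": "P1D"}},
    {"auditTime": "2023-07-31T18:16:16.362Z",
     "compactionConfig": {"dataSource": "wikipedia_hour",
                          "skipOffsetFromLatest": "PT0S"}},
]

# Lexicographic order on ISO-8601 UTC timestamps matches chronological order.
latest = max(history, key=lambda entry: entry["auditTime"])
```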

URL

GET /druid/coordinator/v1/config/compaction/{dataSource}/history

Query parameters

  • interval (optional)
    • Type: ISO-8601
    • Limits the results within a specified interval. Use / as the delimiter for the interval string.
  • count (optional)
    • Type: Int
    • Limits the number of results.
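
Because the interval delimiter is a slash, the interval value is safest percent-encoded when placed in a query string. A small sketch using the standard library; the interval and count values here are hypothetical:

```python
from urllib.parse import quote

# Hypothetical interval bounding the history lookup; "/" separates start/end.
interval = "2023-07-01T00:00:00Z/2023-08-01T00:00:00Z"
encoded = quote(interval, safe="")  # percent-encodes "/" (and ":")
url = ("http://localhost:8888/druid/coordinator/v1/config/compaction"
       f"/wikipedia_hour/history?interval={encoded}&count=10")
```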

Responses

  • 200 SUCCESS: Successfully retrieved configuration history
  • 400 BAD REQUEST: Invalid count value

Sample request

cURL

```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour/history"
```

HTTP

```http
GET /druid/coordinator/v1/config/compaction/wikipedia_hour/history HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

Sample response

```json
[
  {
    "globalConfig": {
      "compactionTaskSlotRatio": 0.1,
      "maxCompactionTaskSlots": 2147483647,
      "useAutoScaleSlots": false
    },
    "compactionConfig": {
      "dataSource": "wikipedia_hour",
      "taskPriority": 25,
      "inputSegmentSizeBytes": 100000000000000,
      "maxRowsPerSegment": null,
      "skipOffsetFromLatest": "P1D",
      "tuningConfig": null,
      "granularitySpec": {
        "segmentGranularity": "DAY",
        "queryGranularity": null,
        "rollup": null
      },
      "dimensionsSpec": null,
      "metricsSpec": null,
      "transformSpec": null,
      "ioConfig": null,
      "taskContext": null
    },
    "auditInfo": {
      "author": "",
      "comment": "",
      "ip": "127.0.0.1"
    },
    "auditTime": "2023-07-31T18:15:19.302Z"
  },
  {
    "globalConfig": {
      "compactionTaskSlotRatio": 0.1,
      "maxCompactionTaskSlots": 2147483647,
      "useAutoScaleSlots": false
    },
    "compactionConfig": {
      "dataSource": "wikipedia_hour",
      "taskPriority": 25,
      "inputSegmentSizeBytes": 100000000000000,
      "maxRowsPerSegment": null,
      "skipOffsetFromLatest": "PT0S",
      "tuningConfig": {
        "maxRowsInMemory": null,
        "appendableIndexSpec": null,
        "maxBytesInMemory": null,
        "maxTotalRows": null,
        "splitHintSpec": null,
        "partitionsSpec": {
          "type": "dynamic",
          "maxRowsPerSegment": 5000000,
          "maxTotalRows": null
        },
        "indexSpec": null,
        "indexSpecForIntermediatePersists": null,
        "maxPendingPersists": null,
        "pushTimeout": null,
        "segmentWriteOutMediumFactory": null,
        "maxNumConcurrentSubTasks": null,
        "maxRetry": null,
        "taskStatusCheckPeriodMs": null,
        "chatHandlerTimeout": null,
        "chatHandlerNumRetries": null,
        "maxNumSegmentsToMerge": null,
        "totalNumMergeTasks": null,
        "maxColumnsToMerge": null,
        "type": "index_parallel",
        "forceGuaranteedRollup": false
      },
      "granularitySpec": {
        "segmentGranularity": "DAY",
        "queryGranularity": null,
        "rollup": null
      },
      "dimensionsSpec": null,
      "metricsSpec": null,
      "transformSpec": null,
      "ioConfig": null,
      "taskContext": null
    },
    "auditInfo": {
      "author": "",
      "comment": "",
      "ip": "127.0.0.1"
    },
    "auditTime": "2023-07-31T18:16:16.362Z"
  }
]
```

View automatic compaction status

Get segments awaiting compaction

Returns the total size of segments awaiting compaction for a given datasource. Returns a 404 response if a datasource does not have automatic compaction enabled.

URL

GET /druid/coordinator/v1/compaction/progress?dataSource={dataSource}

Query parameter

  • dataSource (required)
    • Type: String
    • Name of the datasource for this status information.

Responses

  • 200 SUCCESS: Successfully retrieved segment size awaiting compaction
  • 404 NOT FOUND: Unknown datasource name or datasource does not have automatic compaction enabled

Sample request

The following example retrieves the remaining segments to be compacted for datasource wikipedia_hour.

cURL

```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/compaction/progress?dataSource=wikipedia_hour"
```

HTTP

```http
GET /druid/coordinator/v1/compaction/progress?dataSource=wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

Sample response

```json
{
  "remainingSegmentSize": 7615837
}
```

Get compaction status and statistics

Retrieves an array of latestStatus objects representing the status and statistics from the latest automatic compaction run for all datasources with automatic compaction enabled.

The latestStatus object has the following properties:

  • dataSource: Name of the datasource for this status information.
  • scheduleStatus: Automatic compaction scheduling status. Possible values are NOT_ENABLED and RUNNING. Returns RUNNING if the datasource has an active automatic compaction configuration submitted. Otherwise, returns NOT_ENABLED.
  • bytesAwaitingCompaction: Total bytes of this datasource waiting to be compacted by automatic compaction, counting only intervals and segments eligible for automatic compaction.
  • bytesCompacted: Total bytes of this datasource already compacted with the spec set in the automatic compaction configuration.
  • bytesSkipped: Total bytes of this datasource skipped (not eligible) by automatic compaction.
  • segmentCountAwaitingCompaction: Total number of segments of this datasource waiting to be compacted, counting only intervals and segments eligible for automatic compaction.
  • segmentCountCompacted: Total number of segments of this datasource already compacted with the spec set in the automatic compaction configuration.
  • segmentCountSkipped: Total number of segments of this datasource skipped (not eligible) by automatic compaction.
  • intervalCountAwaitingCompaction: Total number of intervals of this datasource waiting to be compacted, counting only intervals and segments eligible for automatic compaction.
  • intervalCountCompacted: Total number of intervals of this datasource already compacted with the spec set in the automatic compaction configuration.
  • intervalCountSkipped: Total number of intervals of this datasource skipped (not eligible) by automatic compaction.
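
One common use of these fields is deriving a completion fraction for eligible data. The helper below is hypothetical (not a Druid API) and deliberately excludes skipped bytes, since they are not eligible for compaction; the sample values mirror the wikipedia_hour entry in the sample response:

```python
def compaction_progress(status):
    """Fraction of eligible bytes already compacted (skipped bytes excluded)."""
    eligible = status["bytesCompacted"] + status["bytesAwaitingCompaction"]
    if eligible == 0:
        return None  # nothing currently eligible for automatic compaction
    return status["bytesCompacted"] / eligible

progress = compaction_progress(
    {"bytesCompacted": 5998634, "bytesAwaitingCompaction": 0}
)
```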

URL

GET /druid/coordinator/v1/compaction/status

Query parameters

  • dataSource (optional)
    • Type: String
    • Filter the result by name of a specific datasource.

Responses

  • 200 SUCCESS: Successfully retrieved latestStatus object

Sample request

cURL

```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/compaction/status"
```

HTTP

```http
GET /druid/coordinator/v1/compaction/status HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```

Sample response

```json
{
  "latestStatus": [
    {
      "dataSource": "wikipedia_api",
      "scheduleStatus": "RUNNING",
      "bytesAwaitingCompaction": 0,
      "bytesCompacted": 0,
      "bytesSkipped": 64133616,
      "segmentCountAwaitingCompaction": 0,
      "segmentCountCompacted": 0,
      "segmentCountSkipped": 8,
      "intervalCountAwaitingCompaction": 0,
      "intervalCountCompacted": 0,
      "intervalCountSkipped": 1
    },
    {
      "dataSource": "wikipedia_hour",
      "scheduleStatus": "RUNNING",
      "bytesAwaitingCompaction": 0,
      "bytesCompacted": 5998634,
      "bytesSkipped": 0,
      "segmentCountAwaitingCompaction": 0,
      "segmentCountCompacted": 1,
      "segmentCountSkipped": 0,
      "intervalCountAwaitingCompaction": 0,
      "intervalCountCompacted": 1,
      "intervalCountSkipped": 0
    }
  ]
}
```