Split processor

The split processor splits a string field into an array of substrings based on a specified delimiter.

The following is the syntax for the split processor:

{
  "split": {
    "field": "field_to_split",
    "separator": "<delimiter>",
    "target_field": "split_field"
  }
}

Configuration parameters

The following table lists the required and optional parameters for the split processor.

Parameter | Required/Optional | Description
--------- | ----------------- | -----------
field | Required | The field containing the string to be split.
separator | Required | The delimiter used to split the string. This can be a regular expression pattern.
preserve_trailing | Optional | If set to true, preserves empty trailing fields (for example, "") in the resulting array. If set to false, empty trailing fields are removed from the resulting array. Default is false. See the sketch following this table.
target_field | Optional | The field where the array of substrings is stored. If not specified, the field is updated in place.
ignore_missing | Optional | Specifies whether the processor should ignore documents that do not contain the specified field. If set to true, the processor ignores missing values in the field and leaves the target_field unchanged. Default is false.
description | Optional | A brief description of the processor.
if | Optional | A condition for running the processor.
ignore_failure | Optional | Specifies whether the processor continues execution even if it encounters an error. If set to true, failures are ignored. Default is false.
on_failure | Optional | A list of processors to run if the processor fails.
tag | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type.
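
To see how preserve_trailing affects the output, you can simulate an inline pipeline definition without creating a named pipeline first. The following sketch reuses the log_message and log_parts field names from the examples later in this section and passes a value that ends in a trailing comma; with preserve_trailing set to true, the resulting array is expected to end with an empty string:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "split": {
          "field": "log_message",
          "separator": ",",
          "preserve_trailing": true,
          "target_field": "log_parts"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "log_message": "error,warning,"
      }
    }
  ]
}

Without preserve_trailing (or with it set to false), the trailing empty string is dropped and log_parts contains only error and warning.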

Using the processor

Follow these steps to use the processor in a pipeline.

Step 1: Create a pipeline

The following query creates a pipeline named split_pipeline that uses the split processor to split the log_message field on the comma character and store the resulting array in the log_parts field:

PUT _ingest/pipeline/split_pipeline
{
  "description": "Split log messages by comma",
  "processors": [
    {
      "split": {
        "field": "log_message",
        "separator": ",",
        "target_field": "log_parts"
      }
    }
  ]
}

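To confirm that the pipeline was created, you can retrieve its definition with the following request:

GET _ingest/pipeline/split_pipeline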

Step 2 (Optional): Test the pipeline

It is recommended that you test your pipeline before you ingest documents.

To test the pipeline, run the following query:

POST _ingest/pipeline/split_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "log_message": "error,warning,info"
      }
    }
  ]
}

Response

The following example response confirms that the pipeline is working as expected:

{
  "docs": [
    {
      "doc": {
        "_index": "_index",
        "_id": "_id",
        "_source": {
          "log_message": "error,warning,info",
          "log_parts": [
            "error",
            "warning",
            "info"
          ]
        },
        "_ingest": {
          "timestamp": "2024-04-26T22:29:23.207849376Z"
        }
      }
    }
  ]
}

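Because the separator parameter accepts a regular expression, the same approach works for less uniform input. The following sketch simulates an inline pipeline that splits on one or more whitespace characters (\s+) instead of a comma; the field names and document values are illustrative only:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "split": {
          "field": "log_message",
          "separator": "\\s+",
          "target_field": "log_parts"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "log_message": "error   warning info"
      }
    }
  ]
}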

Step 3: Ingest a document

The following query ingests a document into an index named testindex1:

PUT testindex1/_doc/1?pipeline=split_pipeline
{
  "log_message": "error,warning,info"
}

Response

The request applies the pipeline before indexing, splitting the log_message field on the comma delimiter and storing the resulting array in the log_parts field, and then indexes the document into the index testindex1, as shown in the following response:

{
  "_index": "testindex1",
  "_id": "1",
  "_version": 70,
  "result": "updated",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 72,
  "_primary_term": 47
}
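
If you don't want to specify ?pipeline=split_pipeline on every indexing request, you can set the pipeline as the index's default so that it is applied automatically. The following sketch assumes the testindex1 index already exists:

PUT testindex1/_settings
{
  "index.default_pipeline": "split_pipeline"
}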

Step 4 (Optional): Retrieve the document

To retrieve the document, run the following query:

GET testindex1/_doc/1

Response

The response shows the original log_message field along with the log_parts field, which contains the array of values split on the comma delimiter:

{
  "_index": "testindex1",
  "_id": "1",
  "_version": 70,
  "_seq_no": 72,
  "_primary_term": 47,
  "found": true,
  "_source": {
    "log_message": "error,warning,info",
    "log_parts": [
      "error",
      "warning",
      "info"
    ]
  }
}
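
Because log_parts is stored as an array, you can search on its individual values. The following sketch assumes the default dynamic mapping for the field and returns documents whose log_parts array contains the value warning:

GET testindex1/_search
{
  "query": {
    "match": {
      "log_parts": "warning"
    }
  }
}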