Text chunking processor

The text_chunking processor splits a long document into shorter passages. The processor supports the following algorithms for text splitting:

- fixed_token_length: Splits text into passages of a specified token count.
- delimiter: Splits text into passages on a specified delimiter string.

The following is the syntax for the text_chunking processor:

  {
    "text_chunking": {
      "field_map": {
        "<input_field>": "<output_field>"
      },
      "algorithm": {
        "<name>": "<parameters>"
      }
    }
  }

Configuration parameters

The following table lists the required and optional parameters for the text_chunking processor.

| Parameter | Data type | Required/Optional | Description |
| :--- | :--- | :--- | :--- |
| field_map | Object | Required | Contains key-value pairs that specify the mapping of a text field to the output field. |
| field_map.<input_field> | String | Required | The name of the field from which to obtain text for generating chunked passages. |
| field_map.<output_field> | String | Required | The name of the field in which to store the chunked results. |
| algorithm | Object | Required | Contains at most one key-value pair that specifies the chunking algorithm and parameters. |
| algorithm.<name> | String | Optional | The name of the chunking algorithm. Valid values are fixed_token_length or delimiter. Default is fixed_token_length. |
| algorithm.<parameters> | Object | Optional | The parameters for the chunking algorithm. By default, contains the default parameters of the fixed_token_length algorithm. |
| ignore_missing | Boolean | Optional | If true, empty fields are excluded from the output. If false, the output contains an empty list for every empty field. Default is false. |
| description | String | Optional | A brief description of the processor. |
| tag | String | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |

To perform chunking on nested fields, specify input_field and output_field values as JSON objects. Dot paths of nested fields are not supported. For example, use "field_map": { "foo": { "bar": "bar_chunk"} } instead of "field_map": { "foo.bar": "foo.bar_chunk"}.
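For example, the nested mapping above might be configured in a processor as follows (a sketch; the foo and bar field names are placeholders, and the algorithm block names the default algorithm while accepting all of its default parameters):

```json
{
  "text_chunking": {
    "algorithm": {
      "fixed_token_length": {}
    },
    "field_map": {
      "foo": {
        "bar": "bar_chunk"
      }
    }
  }
}
```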

Fixed token length algorithm

The following table lists the optional parameters for the fixed_token_length algorithm.

| Parameter | Data type | Required/Optional | Description |
| :--- | :--- | :--- | :--- |
| token_limit | Integer | Optional | The token limit for chunking algorithms. Valid values are integers of at least 1. Default is 384. |
| tokenizer | String | Optional | The word tokenizer name. Default is standard. |
| overlap_rate | Float | Optional | The degree of overlap in the token algorithm. Valid values are floats between 0 and 0.5, inclusive. Default is 0. |
| max_chunk_limit | Integer | Optional | The chunk limit for chunking algorithms. Default is 100. To disable this parameter, set it to -1. |

The default value of token_limit is 384 so that output passages don’t exceed the token limit constraint of the downstream text embedding models. For OpenSearch-supported pretrained models, like msmarco-distilbert-base-tas-b and opensearch-neural-sparse-encoding-v1, the input token limit is 512. The standard tokenizer tokenizes text into words. According to OpenAI, 1 token equals approximately 0.75 words of English text. The default token limit is calculated as 512 * 0.75 = 384.

You can set the overlap_rate to a decimal percentage value in the 0–0.5 range, inclusive. Following Amazon Bedrock's guidance, we recommend setting this parameter to a value of 0–0.2 to improve accuracy.

The max_chunk_limit parameter limits the number of chunked passages. If the number of passages generated by the processor exceeds the limit, the algorithm will return an exception, prompting you to either increase or disable the limit.
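For example, a processor that disables the chunk limit entirely might be configured as follows (a sketch; the passage_text and passage_chunk field names are placeholders):

```json
{
  "text_chunking": {
    "algorithm": {
      "fixed_token_length": {
        "token_limit": 384,
        "max_chunk_limit": -1
      }
    },
    "field_map": {
      "passage_text": "passage_chunk"
    }
  }
}
```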

Delimiter algorithm

The following table lists the optional parameters for the delimiter algorithm.

| Parameter | Data type | Required/Optional | Description |
| :--- | :--- | :--- | :--- |
| delimiter | String | Optional | A string delimiter used to split text. You can set the delimiter to any string, for example, \n (split text into paragraphs on a new line) or . (split text into sentences). Default is \n\n (split text into paragraphs on two newline characters). |
| max_chunk_limit | Integer | Optional | The chunk limit for chunking algorithms. Default is 100. To disable this parameter, set it to -1. |

The max_chunk_limit parameter limits the number of chunked passages. If the number of passages generated by the processor exceeds the limit, the algorithm will return an exception, prompting you to either increase or disable the limit.
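For example, a processor that splits text into sentences on periods might look like the following (a sketch; the passage_text and passage_chunk field names are placeholders):

```json
{
  "text_chunking": {
    "algorithm": {
      "delimiter": {
        "delimiter": "."
      }
    },
    "field_map": {
      "passage_text": "passage_chunk"
    }
  }
}
```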

Using the processor

Follow these steps to use the processor in a pipeline. You can specify the chunking algorithm when creating the processor. If you don’t provide an algorithm name, the chunking processor will use the default fixed_token_length algorithm along with all its default parameters.
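As a minimal sketch, a processor that relies on the default algorithm can name fixed_token_length with an empty parameter object, accepting all of its defaults (the passage_text and passage_chunk field names are placeholders):

```json
{
  "text_chunking": {
    "algorithm": {
      "fixed_token_length": {}
    },
    "field_map": {
      "passage_text": "passage_chunk"
    }
  }
}
```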

Step 1: Create a pipeline

The following example request creates an ingest pipeline that converts the text in the passage_text field into chunked passages, which will be stored in the passage_chunk field:

  PUT _ingest/pipeline/text-chunking-ingest-pipeline
  {
    "description": "A text chunking ingest pipeline",
    "processors": [
      {
        "text_chunking": {
          "algorithm": {
            "fixed_token_length": {
              "token_limit": 10,
              "overlap_rate": 0.2,
              "tokenizer": "standard"
            }
          },
          "field_map": {
            "passage_text": "passage_chunk"
          }
        }
      }
    ]
  }


Step 2 (Optional): Test the pipeline

It is recommended that you test your pipeline before ingesting documents.

To test the pipeline, run the following query:

  POST _ingest/pipeline/text-chunking-ingest-pipeline/_simulate
  {
    "docs": [
      {
        "_index": "testindex",
        "_id": "1",
        "_source": {
          "passage_text": "This is an example document to be chunked. The document contains a single paragraph, two sentences and 24 tokens by standard tokenizer in OpenSearch."
        }
      }
    ]
  }


Response

The response confirms that, in addition to the passage_text field, the processor has generated chunking results in the passage_chunk field. The processor split the paragraph into 10-word chunks. Because of the overlap setting of 0.2, the last 2 words of a chunk are duplicated in the following chunk:

  {
    "docs": [
      {
        "doc": {
          "_index": "testindex",
          "_id": "1",
          "_source": {
            "passage_text": "This is an example document to be chunked. The document contains a single paragraph, two sentences and 24 tokens by standard tokenizer in OpenSearch.",
            "passage_chunk": [
              "This is an example document to be chunked. The document ",
              "The document contains a single paragraph, two sentences and 24 ",
              "and 24 tokens by standard tokenizer in OpenSearch."
            ]
          },
          "_ingest": {
            "timestamp": "2024-03-20T02:55:25.642366Z"
          }
        }
      }
    ]
  }
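The overlap arithmetic in this response can be approximated with a short sketch. This is not the processor's actual implementation: whitespace splitting stands in for the OpenSearch standard tokenizer (an assumption), so punctuation handling differs slightly, but the chunk boundaries for this example come out the same.

```python
def chunk_fixed_token_length(text, token_limit=384, overlap_rate=0.0):
    """Approximate the fixed_token_length algorithm.

    Each chunk holds at most token_limit tokens, and consecutive
    chunks share int(token_limit * overlap_rate) tokens.
    """
    tokens = text.split()
    overlap = int(token_limit * overlap_rate)
    step = token_limit - overlap
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(" ".join(tokens[start:start + token_limit]))
        if start + token_limit >= len(tokens):
            break
        start += step
    return chunks

passage = ("This is an example document to be chunked. The document contains "
           "a single paragraph, two sentences and 24 tokens by standard "
           "tokenizer in OpenSearch.")
# token_limit=10 with overlap_rate=0.2 repeats the last 2 tokens of
# each chunk at the start of the next, yielding 3 chunks.
chunks = chunk_fixed_token_length(passage, token_limit=10, overlap_rate=0.2)
```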

Once you have created an ingest pipeline, you need to create an index for document ingestion. To learn more, see Text chunking.
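As a sketch, the index can reference the pipeline through the index.default_pipeline setting so that every ingested document is chunked automatically (the testindex name is a placeholder):

```json
PUT testindex
{
  "settings": {
    "index.default_pipeline": "text-chunking-ingest-pipeline"
  }
}
```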

Cascaded text chunking processors

You can chain multiple text chunking processors together. For example, to split documents into paragraphs, apply the delimiter algorithm and specify the parameter as \n\n. To prevent a paragraph from exceeding the token limit, append another text chunking processor that uses the fixed_token_length algorithm. You can configure the ingest pipeline for this example as follows:

  PUT _ingest/pipeline/text-chunking-cascade-ingest-pipeline
  {
    "description": "A text chunking pipeline with cascaded algorithms",
    "processors": [
      {
        "text_chunking": {
          "algorithm": {
            "delimiter": {
              "delimiter": "\n\n"
            }
          },
          "field_map": {
            "passage_text": "passage_chunk1"
          }
        }
      },
      {
        "text_chunking": {
          "algorithm": {
            "fixed_token_length": {
              "token_limit": 500,
              "overlap_rate": 0.2,
              "tokenizer": "standard"
            }
          },
          "field_map": {
            "passage_chunk1": "passage_chunk2"
          }
        }
      }
    ]
  }


Next steps