This documentation describes using the append processor in OpenSearch ingest pipelines, which run on the OpenSearch cluster. Consider using the Data Prepper add_entries processor, which runs outside the cluster, if your use case involves large or complex datasets.

Append processor

The append processor is used to add values to a field:

  • If the field is an array, the append processor appends the specified values to that array.
  • If the field is a scalar field, the append processor converts it to an array and appends the specified values to that array.
  • If the field does not exist, the append processor creates an array with the specified values.
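The three behaviors above can be sketched as a small local model in Python. This is not OpenSearch code, and the function name append_values is illustrative; it simply mirrors the documented semantics, including the allow_duplicates parameter described later:

```python
def append_values(doc, field, values, allow_duplicates=True):
    """Model the append processor: ensure doc[field] is an array, then append."""
    current = doc.get(field)
    if current is None:
        doc[field] = []              # field does not exist: create a new array
    elif not isinstance(current, list):
        doc[field] = [current]       # scalar field: convert it to an array
    for v in values:
        # With allow_duplicates=False, values already present are skipped
        if allow_duplicates or v not in doc[field]:
            doc[field].append(v)
    return doc

append_values({}, "event_types", ["page_view"])
# {'event_types': ['page_view']}
append_values({"event_types": "login"}, "event_types", ["page_view"])
# {'event_types': ['login', 'page_view']}
```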

Syntax

The following is the syntax for the append processor:

  {
    "append": {
      "field": "your_target_field",
      "value": ["your_appended_value"]
    }
  }


Configuration parameters

The following table lists the required and optional parameters for the append processor.

Parameter        | Required/Optional | Description
-----------------|-------------------|------------
field            | Required          | The name of the field to which the values are appended. Supports template snippets.
value            | Required          | The values to append. These can be static values or dynamic values derived from existing fields. Supports template snippets.
description      | Optional          | A brief description of the processor.
if               | Optional          | A condition for running the processor.
ignore_failure   | Optional          | Specifies whether the processor continues execution even if it encounters errors. If set to true, failures are ignored. Default is false.
allow_duplicates | Optional          | Specifies whether to append values that already exist in the field. If set to true, duplicate values are appended; otherwise, they are skipped. Default is true.
on_failure       | Optional          | A list of processors to run if the processor fails.
tag              | Optional          | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type.
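As an example of combining these parameters, the following processor definition appends page_view only when the document contains a user_id field and skips values already present. This is an illustrative configuration, not part of the tutorial pipeline below; the Painless condition in if is an assumption about your document shape:

```json
{
  "append": {
    "field": "event_types",
    "value": ["page_view"],
    "allow_duplicates": false,
    "if": "ctx.user_id != null",
    "ignore_failure": true,
    "description": "Append page_view for identified users",
    "tag": "append-event-type"
  }
}
```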

Using the processor

Follow these steps to use the processor in a pipeline.

Step 1: Create a pipeline

The following query creates a pipeline named user-behavior that has one append processor. The processor appends the value page_view to an array field named event_types in every document ingested through the pipeline:

  PUT _ingest/pipeline/user-behavior
  {
    "description": "Pipeline that appends event type",
    "processors": [
      {
        "append": {
          "field": "event_types",
          "value": ["page_view"]
        }
      }
    ]
  }


Step 2 (Optional): Test the pipeline

It is recommended that you test your pipeline before you ingest documents.

To test the pipeline, run the following query:

  POST _ingest/pipeline/user-behavior/_simulate
  {
    "docs": [
      {
        "_source": {}
      }
    ]
  }


Response

The following response confirms that the pipeline is working as expected:

  {
    "docs": [
      {
        "doc": {
          "_index": "_index",
          "_id": "_id",
          "_source": {
            "event_types": [
              "page_view"
            ]
          },
          "_ingest": {
            "timestamp": "2023-08-28T16:55:10.621805166Z"
          }
        }
      }
    ]
  }
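If you script your pipeline tests, you can check the simulated document programmatically. A minimal sketch in Python, using a response body trimmed to the fields inspected here:

```python
import json

# A trimmed _simulate response, as returned in the previous step
response = json.loads("""
{
  "docs": [
    {
      "doc": {
        "_source": { "event_types": ["page_view"] }
      }
    }
  ]
}
""")

# Pull out the field that the append processor wrote
event_types = response["docs"][0]["doc"]["_source"]["event_types"]
print(event_types)  # ['page_view']
```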

Step 3: Ingest a document

The following query ingests a document into an index named testindex1:

  PUT testindex1/_doc/1?pipeline=user-behavior
  {
  }


Step 4 (Optional): Retrieve the document

To retrieve the document, run the following query:

  GET testindex1/_doc/1


Because the document does not contain an event_types field, an array field is created and the event is appended to the array:

  {
    "_index": "testindex1",
    "_id": "1",
    "_version": 2,
    "_seq_no": 1,
    "_primary_term": 1,
    "found": true,
    "_source": {
      "event_types": [
        "page_view"
      ]
    }
  }