
Bytes

The bytes processor converts a human-readable byte value to its equivalent value in bytes. The field can be a scalar or an array. If the field is a scalar, the value is converted and stored in the field. If the field is an array, all values of the array are converted.

The following is the syntax for the bytes processor:

{
  "bytes": {
    "field": "your_field_name"
  }
}

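The conversion the processor performs can be sketched in Python. This is an illustration only, not the processor's actual implementation, and it assumes standard 1024-based unit multipliers (`b`, `kb`, `mb`, `gb`, `tb`, `pb`):

```python
# Minimal sketch of a human-readable-size-to-bytes conversion.
# Assumption: units are 1024-based, matching common byte-size notation.
UNITS = {
    "b": 1,
    "kb": 1024,
    "mb": 1024**2,
    "gb": 1024**3,
    "tb": 1024**4,
    "pb": 1024**5,
}

def to_bytes(value: str) -> int:
    """Convert a human-readable size such as "10MB" to a byte count."""
    s = value.strip().lower()
    # Check longer suffixes first so "kb" is not mistaken for "b".
    for suffix in sorted(UNITS, key=len, reverse=True):
        if s.endswith(suffix):
            number = s[: -len(suffix)].strip()
            return int(float(number) * UNITS[suffix])
    return int(s)  # No suffix: treat the value as a plain byte count.
```

For example, `to_bytes("10MB")` returns `10485760`, which is the value the examples in this section expect.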

Configuration parameters

The following table lists the required and optional parameters for the bytes processor.

| Parameter | Required/Optional | Description |
| --- | --- | --- |
| `field` | Required | The name of the field containing the data to be converted. Supports template snippets. |
| `description` | Optional | A brief description of the processor. |
| `if` | Optional | A condition for running this processor. |
| `ignore_failure` | Optional | If set to `true`, failures are ignored. Default is `false`. |
| `ignore_missing` | Optional | If set to `true`, the processor does not modify the document if the field does not exist or is `null`. Default is `false`. |
| `on_failure` | Optional | A list of processors to run if the processor fails. |
| `tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
| `target_field` | Optional | The name of the field in which to store the converted value. If not specified, the converted value overwrites the original value in `field`. Default is `field`. |

Using the processor

Follow these steps to use the processor in a pipeline.

Step 1: Create a pipeline.

The following query creates a pipeline named file_upload that contains one bytes processor. The processor converts the value of the file_size field to its byte equivalent and stores the result in a new field named file_size_bytes:

PUT _ingest/pipeline/file_upload
{
  "description": "Pipeline that converts file size to bytes",
  "processors": [
    {
      "bytes": {
        "field": "file_size",
        "target_field": "file_size_bytes"
      }
    }
  ]
}


Step 2 (Optional): Test the pipeline.

It is recommended that you test your pipeline before you ingest documents.

To test the pipeline, run the following query:

POST _ingest/pipeline/file_upload/_simulate
{
  "docs": [
    {
      "_index": "testindex1",
      "_id": "1",
      "_source": {
        "file_size_bytes": "10485760",
        "file_size": "10MB"
      }
    }
  ]
}


Response

The following response confirms that the pipeline is working as expected:

{
  "docs": [
    {
      "doc": {
        "_index": "testindex1",
        "_id": "1",
        "_source": {
          "file_size_bytes": 10485760,
          "file_size": "10MB"
        },
        "_ingest": {
          "timestamp": "2023-08-22T16:09:42.771569211Z"
        }
      }
    }
  ]
}

Step 3: Ingest a document.

The following query ingests a document into an index named testindex1:

PUT testindex1/_doc/1?pipeline=file_upload
{
  "file_size": "10MB"
}

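Because the request specifies the file_upload pipeline, the document passes through the bytes processor before it is indexed. The indexing response should look similar to the following (the exact version, sequence number, and shard counts will vary with your cluster):

```json
{
  "_index": "testindex1",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 0,
  "_primary_term": 1
}
```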

Step 4 (Optional): Retrieve the document.

To retrieve the document, run the following query:

GET testindex1/_doc/1

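The returned document should contain both the original field and the converted value, similar to the following (the metadata values will vary with your cluster):

```json
{
  "_index": "testindex1",
  "_id": "1",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "found": true,
  "_source": {
    "file_size": "10MB",
    "file_size_bytes": 10485760
  }
}
```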