# s3

## Overview
This is a source plugin that reads events from Amazon Simple Storage Service (Amazon S3) objects.
Option | Required | Type | Description
---|---|---|---
`notification_type` | Yes | String | Must be `sqs`.
`compression` | No | String | The compression algorithm to apply: `none`, `gzip`, or `automatic`. Default is `none`.
`codec` | Yes | Codec | The codec to apply. Must be `newline`, `json`, or `csv`.
`sqs` | Yes | sqs | The Amazon Simple Queue Service (Amazon SQS) configuration. See sqs for details.
`aws` | Yes | aws | The AWS configuration. See aws for details.
`on_error` | No | String | Determines how to handle errors in Amazon SQS. Can be either `retain_messages` or `delete_messages`. If `retain_messages`, then Data Prepper leaves the message in the SQS queue and tries again. This is recommended for dead-letter queues. If `delete_messages`, then Data Prepper deletes failed messages. Default is `retain_messages`.
`buffer_timeout` | No | Duration | The timeout for writing events to the Data Prepper buffer. Any events that the S3 Source cannot write to the buffer within this time are discarded. Default is 10 seconds.
`records_to_accumulate` | No | Integer | The number of messages to accumulate before writing to the buffer. Default is 100.
`metadata_root_key` | No | String | The base key for adding S3 metadata to each event. The metadata includes the key and bucket for each S3 object. Default is `s3/`.
`disable_bucket_ownership_validation` | No | Boolean | If `true`, then the S3 Source does not attempt to validate that the bucket is owned by the expected account, which is the same account that owns the SQS queue. Default is `false`.
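The options above can be combined into a pipeline definition. The following is a minimal sketch; the pipeline name, queue URL, role ARN, and account ID are placeholders, not values from this documentation:

```yaml
# Hypothetical pipeline reading gzip-compressed, newline-delimited logs from S3.
s3-log-pipeline:
  source:
    s3:
      notification_type: "sqs"
      compression: "gzip"
      codec:
        newline:
      sqs:
        queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/my-s3-events"  # placeholder
      aws:
        region: "us-east-1"
        sts_role_arn: "arn:aws:iam::123456789012:role/data-prepper-s3"  # placeholder
  sink:
    - stdout:
```

Here the SQS queue is assumed to receive S3 Event Notifications for the bucket whose objects should be ingested.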
## sqs

The following options configure the usage of Amazon SQS in the S3 Source plugin.
Option | Required | Type | Description
---|---|---|---
`queue_url` | Yes | String | The URL of the Amazon SQS queue from which messages are received.
`maximum_messages` | No | Integer | The maximum number of messages to receive from the SQS queue in any single request. Default is `10`.
`visibility_timeout` | No | Duration | The visibility timeout to apply to messages read from the SQS queue. Set this to the amount of time that Data Prepper may take to read all of the S3 objects in a batch. Default is `30s`.
`wait_time` | No | Duration | The amount of time to wait for long polling on the SQS API. Default is `20s`.
`poll_delay` | No | Duration | A delay placed between reading and processing a batch of SQS messages and making a subsequent request. Default is `0s`.
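As a sketch, an `sqs` block with the tuning options set explicitly might look like the following (the queue URL is a placeholder):

```yaml
sqs:
  queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/my-s3-events"  # placeholder
  maximum_messages: 10
  visibility_timeout: "60s"   # allow enough time to read a full batch of S3 objects
  wait_time: "20s"            # long polling reduces empty responses
  poll_delay: "0s"
```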
## aws
Option | Required | Type | Description
---|---|---|---
`region` | No | String | The AWS Region to use for credentials. Defaults to the standard SDK behavior for determining the Region.
`sts_role_arn` | No | String | The AWS Security Token Service (AWS STS) role to assume for requests to Amazon SQS and Amazon S3. Defaults to `null`, which uses the standard SDK behavior for credentials.
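For example, the following hypothetical `aws` block pins the Region and assumes a cross-account role (the ARN is a placeholder):

```yaml
aws:
  region: "us-west-2"
  sts_role_arn: "arn:aws:iam::123456789012:role/data-prepper"  # placeholder role with S3 and SQS access
```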
# file
Source for flat file input.
Option | Required | Type | Description
---|---|---|---
`path` | Yes | String | The path to the input file (for example, `logs/my-log.log`).
`format` | No | String | The format of each line in the file. Valid options are `json` or `plain`. Default is `plain`.
`record_type` | No | String | The record type to store. Valid options are `string` or `event`. Default is `string`. To use the file source for log analytics use cases, such as grok, set this option to `event`.
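A minimal sketch of a pipeline using the file source (the pipeline name and file path are placeholders):

```yaml
# Hypothetical pipeline reading plain-text log lines as events.
file-pipeline:
  source:
    file:
      path: "logs/my-log.log"   # placeholder path
      format: "plain"
      record_type: "event"      # store as events for log analytics use cases such as grok
  sink:
    - stdout:
```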
# pipeline
Source for reading from another pipeline.
Option | Required | Type | Description
---|---|---|---
`name` | Yes | String | The name of the pipeline to read from.
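A pipeline source is typically paired with a pipeline sink in another pipeline. The following sketch connects two hypothetical pipelines (both pipeline names are placeholders):

```yaml
# Hypothetical upstream pipeline writing to a downstream pipeline.
entry-pipeline:
  source:
    file:
      path: "logs/my-log.log"   # placeholder path
  sink:
    - pipeline:
        name: "processing-pipeline"

# Downstream pipeline reading from the upstream pipeline by name.
processing-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  sink:
    - stdout:
```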
# stdin

Source for console input. Can be useful for testing. No options.
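Because stdin takes no options, a test pipeline sketch is short (the pipeline name is a placeholder):

```yaml
# Hypothetical pipeline echoing console input back to the console.
stdin-pipeline:
  source:
    stdin:
  sink:
    - stdout:
```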