Beats Doris output plugin

Beats is a data collection agent that supports custom output plugins for writing data into storage systems; the Beats Doris output plugin is the plugin that writes to Doris.

The Beats Doris output plugin supports Filebeat, Metricbeat, Packetbeat, Winlogbeat, Auditbeat, and Heartbeat.

By invoking the Doris Stream Load HTTP interface, the Beats Doris output plugin writes data into Doris in real time, and provides capabilities such as multi-threaded concurrency, failure retries, custom Stream Load formats and parameters, and reporting of write speed.

To use the Beats Doris output plugin, there are three main steps:

  1. Download or compile the Beats binary program that includes the Doris output plugin.
  2. Configure the Beats output address and other parameters.
  3. Start Beats to write data into Doris in real-time.

Installation

Download from the Official Website

https://apache-doris-releases.oss-accelerate.aliyuncs.com/filebeat-doris-2.0.0
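
For example, the following commands download the prebuilt filebeat binary and make it executable (the file name comes from the link above; rename it if you prefer a shorter name):

  wget https://apache-doris-releases.oss-accelerate.aliyuncs.com/filebeat-doris-2.0.0
  chmod +x filebeat-doris-2.0.0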

Compile from Source Code

Execute the following commands in the extension/beats/ directory:

  cd doris/extension/beats
  go build -o filebeat-doris filebeat/filebeat.go
  go build -o metricbeat-doris metricbeat/metricbeat.go
  go build -o winlogbeat-doris winlogbeat/winlogbeat.go
  go build -o packetbeat-doris packetbeat/packetbeat.go
  go build -o auditbeat-doris auditbeat/auditbeat.go
  go build -o heartbeat-doris heartbeat/heartbeat.go

Configuration

The configuration for the Beats Doris output plugin is as follows:

- http_hosts: Stream Load HTTP address, formatted as a string array that can contain one or more elements, each element being host:port. For example: ["http://fe1:8030", "http://fe2:8030"]
- user: Doris username; this user needs import privileges on the target Doris database and table
- password: Doris user's password
- database: the Doris database name to write into
- table: the Doris table name to write into
- label_prefix: Doris Stream Load label prefix; the final generated label is {label_prefix}_{db}_{table}_{yyyymmdd_hhmmss}_{uuid}. The default value is beats
- headers: Doris Stream Load headers parameter, in YAML map syntax
- codec_format_string: format string for the output to Doris Stream Load; %{[a][b]} refers to the a.b field of the input. See the usage examples in the following sections
- bulk_max_size: Doris Stream Load batch size, default is 100000
- max_retries: number of retries for failed Doris Stream Load requests; the default -1 means infinite retries to ensure data reliability
- log_request: whether to log Doris Stream Load request and response metadata for troubleshooting, default is true
- log_progress_interval: interval in seconds for logging write speed; default is 10, and 0 disables this type of logging
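
Putting these parameters together, a minimal output.doris section might look like the following sketch; the hosts, credentials, database, and table names are placeholders, and the headers simply request JSON format as in the examples below:

  output.doris:
    http_hosts: [ "http://fe1:8030" ]
    user: "your_username"
    password: "your_password"
    database: "your_db"
    table: "your_table"
    codec_format_string: '%{[message]}'
    headers:
      format: "json"
      read_json_by_line: "true"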

Usage Example

TEXT Log Collection Example

This example demonstrates TEXT log collection using Doris FE logs as an example.

1. Data

FE log files are typically located at fe/log/fe.log under the Doris installation directory. They are typical Java program logs, including fields such as timestamp, log level, thread name, code location, and log content. Besides normal log lines, they also contain exception logs with multi-line stacktraces, so log collection and storage need to merge the main log line and its stacktrace into a single log entry.

  2024-07-08 21:18:01,432 INFO (Statistics Job Appender|61) [StatisticsJobAppender.runAfterCatalogReady():70] Stats table not available, skip
  2024-07-08 21:18:53,710 WARN (STATS_FETCH-0|208) [StmtExecutor.executeInternalQuery():3332] Failed to run internal SQL: OriginStatement{originStmt='SELECT * FROM __internal_schema.column_statistics WHERE part_id is NULL ORDER BY update_time DESC LIMIT 500000', idx=0}
  org.apache.doris.common.UserException: errCode = 2, detailMessage = tablet 10031 has no queryable replicas. err: replica 10032's backend 10008 does not exist or not alive
          at org.apache.doris.planner.OlapScanNode.addScanRangeLocations(OlapScanNode.java:931) ~[doris-fe.jar:1.2-SNAPSHOT]
          at org.apache.doris.planner.OlapScanNode.computeTabletInfo(OlapScanNode.java:1197) ~[doris-fe.jar:1.2-SNAPSHOT]

2. Table Creation

The table structure includes fields such as the log’s creation time, collection time, hostname, log file path, log type, log level, thread name, code location, and log content.

  CREATE TABLE `doris_log` (
    `log_time` datetime NULL COMMENT 'log content time',
    `collect_time` datetime NULL COMMENT 'log agent collect time',
    `host` text NULL COMMENT 'hostname or ip',
    `path` text NULL COMMENT 'log file path',
    `type` text NULL COMMENT 'log type',
    `level` text NULL COMMENT 'log level',
    `thread` text NULL COMMENT 'log thread',
    `position` text NULL COMMENT 'log code position',
    `message` text NULL COMMENT 'log message',
    INDEX idx_host (`host`) USING INVERTED COMMENT '',
    INDEX idx_path (`path`) USING INVERTED COMMENT '',
    INDEX idx_type (`type`) USING INVERTED COMMENT '',
    INDEX idx_level (`level`) USING INVERTED COMMENT '',
    INDEX idx_thread (`thread`) USING INVERTED COMMENT '',
    INDEX idx_position (`position`) USING INVERTED COMMENT '',
    INDEX idx_message (`message`) USING INVERTED PROPERTIES("parser" = "unicode", "support_phrase" = "true") COMMENT ''
  ) ENGINE=OLAP
  DUPLICATE KEY(`log_time`)
  COMMENT 'OLAP'
  PARTITION BY RANGE(`log_time`) ()
  DISTRIBUTED BY RANDOM BUCKETS 10
  PROPERTIES (
    "replication_num" = "1",
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.start" = "-7",
    "dynamic_partition.end" = "1",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "10",
    "dynamic_partition.create_history_partition" = "true",
    "compaction_policy" = "time_series"
  );

3. Configuration

The filebeat log collection configuration file, such as filebeat_doris_log.yml, is in YAML format and mainly consists of four parts corresponding to the various stages of ETL:

  1. Input is responsible for reading the raw data.
  2. Processor is responsible for data transformation.
  3. queue.mem configures the internal buffer queue of filebeat.
  4. Output is responsible for sending the data to the output destination.
  # 1. input is responsible for reading raw data
  # type: log is a log input plugin that can be configured with the path of the log file.
  # It uses the multiline feature to append lines that do not start with a timestamp to
  # the previous line, so that a stacktrace is merged with its main log line. The log
  # input stores the log content in the message field, along with metadata fields such
  # as agent.hostname and log.file.path.
  filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /path/to/your/log
    # multiline concatenates multi-line logs (e.g., Java stacktraces)
    multiline:
      type: pattern
      # Effect: lines starting with yyyy-mm-dd HH:MM:SS are treated as a new log entry;
      # all other lines are appended to the previous entry
      pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
      negate: true
      match: after
      skip_newline: true

  # 2. processors section is responsible for data transformation
  processors:
  # Use the script plugin (javascript) to replace \t in logs with spaces to avoid JSON parsing errors
  - script:
      lang: javascript
      source: >
        function process(event) {
          var msg = event.Get("message");
          msg = msg.replace(/\t/g, " ");
          event.Put("message", msg);
        }
  # Use the dissect plugin for simple log parsing
  - dissect:
      # Example log: 2024-06-08 18:26:25,481 INFO (report-thread|199) [ReportHandler.cpuReport():617] begin to handle
      tokenizer: "%{day} %{time} %{log_level} (%{thread}) [%{position}] %{content}"
      target_prefix: ""
      ignore_failure: true
      overwrite_keys: true

  # 3. internal buffer queue: total event count, flush batch size, flush interval
  queue.mem:
    events: 1000000
    flush.min_events: 100000
    flush.timeout: 10s

  # 4. output section is responsible for data output
  # The doris output sends data to Doris through the Stream Load HTTP interface. The
  # headers parameter sets the Stream Load data format to JSON, and codec_format_string
  # formats the output to Doris in a printf-like manner. The example below builds a JSON
  # document from filebeat's built-in fields, such as agent.hostname, and from fields
  # produced by processors such as dissect, e.g. day, referencing them as %{[a][b]}.
  # Stream Load automatically writes the JSON fields into the corresponding columns of
  # the Doris table.
  output.doris:
    fenodes: [ "http://fehost1:http_port", "http://fehost2:http_port", "http://fehost3:http_port" ]
    user: "your_username"
    password: "your_password"
    database: "your_db"
    table: "your_table"
    # Output string format
    ## %{[agent][hostname]} and %{[log][file][path]} are filebeat's built-in metadata fields
    ## Common filebeat metadata also includes the collection timestamp %{[@timestamp]}
    ## %{[day]} and %{[time]} are fields produced by the dissect parsing above
    codec_format_string: '{"ts": "%{[day]} %{[time]}", "host": "%{[agent][hostname]}", "path": "%{[log][file][path]}", "message": "%{[message]}" }'
    headers:
      format: "json"
      read_json_by_line: "true"
      load_to_single_tablet: "true"

4. Running Filebeat

  ./filebeat-doris -c config/filebeat_doris_log.yml

  # When log_request is set to true, the log outputs the request parameters and response of each Stream Load
  doris stream load response:
  {
      "TxnId": 45464,
      "Label": "logstash_log_db_doris_log_20240708_223532_539_6c20a0d1-dcab-4b8e-9bc0-76b46a929bd1",
      "Comment": "",
      "TwoPhaseCommit": "false",
      "Status": "Success",
      "Message": "OK",
      "NumberTotalRows": 452,
      "NumberLoadedRows": 452,
      "NumberFilteredRows": 0,
      "NumberUnselectedRows": 0,
      "LoadBytes": 277230,
      "LoadTimeMs": 1797,
      "BeginTxnTimeMs": 0,
      "StreamLoadPutTimeMs": 18,
      "ReadDataTimeMs": 9,
      "WriteDataTimeMs": 1758,
      "CommitAndPublishTimeMs": 18
  }

  # By default, speed information is logged every 10 seconds, including the amount of data since startup (in MB and ROWS), the overall speed (in MB/s and R/S), and the speed over the last 10 seconds
  total 11 MB 18978 ROWS, total speed 0 MB/s 632 R/s, last 10 seconds speed 1 MB/s 1897 R/s

JSON Log Collection Example

This example demonstrates JSON log collection using data from the GitHub events archive.

1. Data

The GitHub events archive contains archived data of GitHub user actions in JSON format. It can be downloaded from data.gharchive.org; for example, the following command fetches the data for January 1, 2024, 15:00.

  wget https://data.gharchive.org/2024-01-01-15.json.gz
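
The Filebeat log input reads plain-text files, so decompress the archive before collecting it (a minimal sketch; adjust the path to wherever you store the data):

  gunzip 2024-01-01-15.json.gz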

Below is a sample of the data. Normally, each piece of data is on a single line, but for ease of display, it has been formatted here.

  {
    "id": "37066529221",
    "type": "PushEvent",
    "actor": {
      "id": 46139131,
      "login": "Bard89",
      "display_login": "Bard89",
      "gravatar_id": "",
      "url": "https://api.github.com/users/Bard89",
      "avatar_url": "https://avatars.githubusercontent.com/u/46139131?"
    },
    "repo": {
      "id": 780125623,
      "name": "Bard89/talk-to-me",
      "url": "https://api.github.com/repos/Bard89/talk-to-me"
    },
    "payload": {
      "repository_id": 780125623,
      "push_id": 17799451992,
      "size": 1,
      "distinct_size": 1,
      "ref": "refs/heads/add_mvcs",
      "head": "f03baa2de66f88f5f1754ce3fa30972667f87e81",
      "before": "85e6544ede4ae3f132fe2f5f1ce0ce35a3169d21"
    },
    "public": true,
    "created_at": "2024-04-01T23:00:00Z"
  }

2. Table Creation

  CREATE DATABASE log_db;
  USE log_db;

  CREATE TABLE github_events
  (
    `created_at` DATETIME,
    `id` BIGINT,
    `type` TEXT,
    `public` BOOLEAN,
    `actor` VARIANT,
    `repo` VARIANT,
    `payload` TEXT,
    INDEX `idx_id` (`id`) USING INVERTED,
    INDEX `idx_type` (`type`) USING INVERTED,
    INDEX `idx_actor` (`actor`) USING INVERTED,
    INDEX `idx_host` (`repo`) USING INVERTED,
    INDEX `idx_payload` (`payload`) USING INVERTED PROPERTIES("parser" = "unicode", "support_phrase" = "true")
  )
  ENGINE = OLAP
  DUPLICATE KEY(`created_at`)
  PARTITION BY RANGE(`created_at`) ()
  DISTRIBUTED BY RANDOM BUCKETS 10
  PROPERTIES (
    "replication_num" = "1",
    "compaction_policy" = "time_series",
    "enable_single_replica_compaction" = "true",
    "dynamic_partition.enable" = "true",
    "dynamic_partition.create_history_partition" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.start" = "-30",
    "dynamic_partition.end" = "1",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "10",
    "dynamic_partition.replication_num" = "1"
  );

3. Filebeat Configuration

This configuration file differs from the previous TEXT log collection in the following aspects:

  1. Processors are not used because no additional processing or transformation is needed.
  2. The codec_format_string in the output is simple, directly outputting the entire message, which is the raw content.
  # input
  filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /path/to/your/log

  # queue and batch
  queue.mem:
    events: 1000000
    flush.min_events: 100000
    flush.timeout: 10s

  # output
  output.doris:
    fenodes: [ "http://fehost1:http_port", "http://fehost2:http_port", "http://fehost3:http_port" ]
    user: "your_username"
    password: "your_password"
    database: "your_db"
    table: "your_table"
    # output string format
    ## Output the raw message of each line of the original file directly. Since the headers
    ## specify format: "json", Stream Load automatically parses the JSON fields and writes
    ## them into the corresponding columns of the Doris table.
    codec_format_string: '%{[message]}'
    headers:
      format: "json"
      read_json_by_line: "true"
      load_to_single_tablet: "true"

4. Running Filebeat

  ./filebeat-doris -c config/filebeat_github_events.yml
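
Once Filebeat has been running for a while, you can verify in Doris that data is arriving. A simple sanity check against the table created above (a sketch, assuming the database and table names from the earlier CREATE statements):

  SELECT COUNT(*) FROM log_db.github_events;
  SELECT type, COUNT(*) AS cnt FROM log_db.github_events GROUP BY type ORDER BY cnt DESC LIMIT 10;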