BigQuery
The BigQuery output plugin is an experimental plugin that allows you to stream records into the Google Cloud BigQuery service. The implementation does not support the following features, which would be expected in a full production version:
- Application Default Credentials.
- Data deduplication using `insertId`.
- Template tables using `templateSuffix`.
Google Cloud Configuration
Fluent Bit streams data into an existing BigQuery table using a service account that you specify. Therefore, before using the BigQuery output plugin, you must create a service account, create a BigQuery dataset and table, authorize the service account to write to the table, and provide the service account credentials to Fluent Bit.
Creating a Service Account
To stream data into BigQuery, the first step is to create a Google Cloud service account for Fluent Bit.
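For example, using the `gcloud` CLI; the account name `fluent-bit` and project `my_project` below are illustrative placeholders:

```
# Create a dedicated service account for Fluent Bit (names illustrative)
gcloud iam service-accounts create fluent-bit \
    --display-name "Fluent Bit BigQuery output" \
    --project my_project
```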
Creating a BigQuery Dataset and Table
Fluent Bit does not create datasets or tables for your data, so you must create these ahead of time. You must also grant the service account `WRITER` permission on the dataset:
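A minimal sketch using the `bq` CLI, assuming a dataset named `my_dataset` and a service account email such as `fluent-bit@my_project.iam.gserviceaccount.com` (both illustrative):

```
# Create the dataset (project and dataset names are illustrative)
bq mk --dataset my_project:my_dataset

# Grant the service account WRITER access: export the dataset's
# metadata, add an access entry by hand, then apply the update.
bq show --format=prettyjson my_project:my_dataset > dataset.json
# In dataset.json, add to the "access" list:
#   { "role": "WRITER",
#     "userByEmail": "fluent-bit@my_project.iam.gserviceaccount.com" }
bq update --source dataset.json my_project:my_dataset
```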
Within the dataset you will need to create a table for the data to reside in. Pay close attention to the schema: it must match the schema of your output JSON. Since BigQuery does not allow dots in field names, you will need to use a filter to rename the fields produced by many of the standard inputs (e.g., `mem` or `cpu`).
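For example, a table whose schema matches the `dummy` input used in the configuration below could be created with the `bq` CLI; the project, dataset, and table names are illustrative:

```
# Create a table whose schema matches the dummy input's output
# ({"message": "dummy"}); all names here are illustrative.
bq mk --table my_project:my_dataset.dummy_table message:STRING
```

For inputs that emit dotted keys, one approach is a `modify` filter that renames them before they reach BigQuery. The tag and key mappings below are a sketch for the `mem` input, not an exhaustive list:

```
[FILTER]
    Name   modify
    Match  mem.*
    # Rename dotted keys to BigQuery-safe names (illustrative subset)
    Rename Mem.total mem_total
    Rename Mem.used  mem_used
    Rename Mem.free  mem_free
```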
Retrieving Service Account Credentials
The Fluent Bit BigQuery output plugin uses a JSON credentials file for authentication. Download a credentials file for your service account:
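For example, with the `gcloud` CLI; the key path and service account email are illustrative:

```
# Download a JSON key file for the service account (names illustrative)
gcloud iam service-accounts keys create /path/to/credentials.json \
    --iam-account fluent-bit@my_project.iam.gserviceaccount.com
```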
Configuration Parameters
| Key | Description | Default |
| --- | --- | --- |
| google_service_credentials | Absolute path to a Google Cloud credentials JSON file. | Value of the environment variable `$GOOGLE_SERVICE_CREDENTIALS` |
| project_id | The ID of the project containing the BigQuery dataset to stream into. | The value of `project_id` in the credentials file |
| dataset_id | The ID of the BigQuery dataset to write into. This dataset must already exist in your project. | |
| table_id | The ID of the BigQuery table to write into. This table must already exist in the specified dataset, and its schema must match the output. | |
Configuration File
If you are using a Google Cloud Credentials File, the following configuration is enough to get you started:
```
[INPUT]
    Name dummy
    Tag  dummy

[OUTPUT]
    Name       bigquery
    Match      *
    dataset_id my_dataset
    table_id   dummy_table
```
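Assuming the configuration above is saved as `fluent-bit.conf` (an illustrative name), you can supply the credentials file through the environment variable listed in the parameters table and start Fluent Bit:

```
export GOOGLE_SERVICE_CREDENTIALS=/path/to/credentials.json
fluent-bit -c fluent-bit.conf
```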