Google BigQuery

Hudi tables can be queried from Google Cloud BigQuery as external tables. As of now, the Hudi-BigQuery integration only works for hive-style partitioned Copy-On-Write and Read-Optimized Merge-On-Read tables.

Sync Modes

Manifest File

As of version 0.14.0, the BigQuerySyncTool supports syncing a table to BigQuery using manifests. On the first run, the tool creates a manifest file representing the current base files in the table, and a table in BigQuery based on the provided configurations. On each subsequent run, the tool produces a new manifest file and updates the schema of the table in BigQuery if the schema of your Hudi table changes.

Benefits of using the new manifest approach:

  1. Only the files in the manifest are scanned, leading to lower cost and better performance for your queries
  2. The schema is now synced from the Hudi commit metadata, allowing for proper schema evolution
  3. Lists no longer have unnecessary nesting when queried in BigQuery, as list inference is enabled by default
  4. The partition column no longer needs to be dropped from the files, due to new schema handling improvements

To enable this feature, set hoodie.gcp.bigquery.sync.use_bq_manifest_file to true.
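
If you run the sync through HoodieStreamer, enabling manifests is one more --hoodie-conf entry alongside the other BigQuery sync configs. A minimal sketch (project and dataset names are illustrative; the full set of options is shown in the example at the end of this page):

```shell
spark-submit \
  --class org.apache.hudi.utilities.streamer.HoodieStreamer \
  ... \
  --enable-sync \
  --sync-tool-classes org.apache.hudi.gcp.bigquery.BigQuerySyncTool \
  --hoodie-conf hoodie.gcp.bigquery.sync.project_id=my-project \
  --hoodie-conf hoodie.gcp.bigquery.sync.dataset_name=my_dataset \
  --hoodie-conf hoodie.gcp.bigquery.sync.use_bq_manifest_file=true
```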

View Over Files (Legacy)

This is the current default behavior, preserved for compatibility as users upgrade to 0.14.0 and beyond.
After each run, the sync tool creates two tables and one view in the target dataset in BigQuery. The tables and the view share the same name prefix, which is taken from the Hudi table name. Querying the view returns the same results as querying the Copy-on-Write Hudi table.
NOTE: The view scans all of the parquet files under your table's base path, so it is recommended to upgrade to the manifest-based approach for improved cost and performance.
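
For instance, if the Hudi table is named mytable, the view carries that name prefix and can be queried like any other BigQuery view, e.g. via the bq CLI. A quick sketch (project and dataset names are illustrative):

```shell
# Querying the synced view returns the same results as querying
# the Copy-on-Write Hudi table (names below are illustrative).
bq query --use_legacy_sql=false \
  'SELECT * FROM `my-project.my_dataset.mytable` LIMIT 10'
```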

Configurations

Hudi uses org.apache.hudi.gcp.bigquery.BigQuerySyncTool to sync tables. It works with HoodieStreamer by setting the sync tool class. A few BigQuery-specific configurations are required.

| Config | Notes |
| ------ | ----- |
| hoodie.gcp.bigquery.sync.project_id | The target Google Cloud project |
| hoodie.gcp.bigquery.sync.dataset_name | BigQuery dataset name; create it before running the sync tool |
| hoodie.gcp.bigquery.sync.dataset_location | Region of the dataset; must match the region of the GCS bucket that stores the Hudi table |
| hoodie.gcp.bigquery.sync.source_uri | A wildcard path pattern pointing to the first-level partition; the partition key can be specified or auto-inferred. Required only for partitioned tables |
| hoodie.gcp.bigquery.sync.source_uri_prefix | The common prefix of the source_uri, usually the path to the Hudi table; a trailing slash does not matter |
| hoodie.gcp.bigquery.sync.base_path | The usual base path config for the Hudi table |
| hoodie.gcp.bigquery.sync.use_bq_manifest_file | Set to true to enable the manifest-based sync |
| hoodie.gcp.bigquery.sync.require_partition_filter | Introduced in Hudi 0.14.1. Accepts a BOOLEAN value, defaulting to false. When set to true, every query against a partitioned table must include a partition filter (a WHERE clause) on the partitioning column; queries lacking such a filter result in an error |

Refer to org.apache.hudi.gcp.bigquery.BigQuerySyncConfig for the complete configuration list.
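
Putting the required pieces together, a minimal set of BigQuery sync configs for a manifest-based sync of a partitioned table might look like the following sketch (bucket, project, dataset, table, and partition names are all illustrative):

```
hoodie.gcp.bigquery.sync.project_id=my-project
hoodie.gcp.bigquery.sync.dataset_name=my_dataset
hoodie.gcp.bigquery.sync.dataset_location=us-central1
hoodie.gcp.bigquery.sync.table_name=my_table
hoodie.gcp.bigquery.sync.base_path=gs://my-bucket/my-hudi-table
hoodie.gcp.bigquery.sync.source_uri=gs://my-bucket/my-hudi-table/date=*
hoodie.gcp.bigquery.sync.source_uri_prefix=gs://my-bucket/my-hudi-table/
hoodie.gcp.bigquery.sync.use_bq_manifest_file=true
```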

Partition Handling

In addition to the BigQuery-specific configs, you will need to use hive-style partitioning for partition pruning to work in BigQuery. On top of that, the value in the partition path is the value returned for that field in your queries. For example, if you partition on a time-millis field, time, with an output format of time=yyyy-MM-dd, queries will return time values with day-level granularity instead of the original milliseconds, so keep this in mind while setting up your tables (see the query sketch after the config below).

```
hoodie.datasource.write.hive_style_partitioning = 'true'
```
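
To make the granularity point concrete: after syncing a table partitioned as time=yyyy-MM-dd, the time column surfaces the partition-path values, not the original millisecond timestamps. A sketch (all names are illustrative):

```shell
# The partition column reflects the partition path, so values come back
# at day granularity (e.g. 2023-04-01), not as the original epoch millis.
bq query --use_legacy_sql=false \
  'SELECT DISTINCT time FROM `my-project.my_dataset.mytable` ORDER BY time LIMIT 5'
```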

For the view-based sync you must also specify the following configurations:

```
hoodie.datasource.write.drop.partition.columns = 'true'
hoodie.partition.metafile.use.base.format = 'true'
```

Example

Below is an example of running BigQuerySyncTool with HoodieStreamer.

```shell
spark-submit --master yarn \
--packages com.google.cloud:google-cloud-bigquery:2.10.4 \
--jars /opt/hudi-gcp-bundle-0.13.0.jar \
--class org.apache.hudi.utilities.streamer.HoodieStreamer \
/opt/hudi-utilities-bundle_2.12-0.13.0.jar \
--target-base-path gs://my-hoodie-table/path \
--target-table mytable \
--table-type COPY_ON_WRITE \
--base-file-format PARQUET \
# ... other Hudi Streamer options
--enable-sync \
--sync-tool-classes org.apache.hudi.gcp.bigquery.BigQuerySyncTool \
--hoodie-conf hoodie.streamer.source.dfs.root=gs://my-source-data/path \
--hoodie-conf hoodie.gcp.bigquery.sync.project_id=hudi-bq \
--hoodie-conf hoodie.gcp.bigquery.sync.dataset_name=rxusandbox \
--hoodie-conf hoodie.gcp.bigquery.sync.dataset_location=asia-southeast1 \
--hoodie-conf hoodie.gcp.bigquery.sync.table_name=mytable \
--hoodie-conf hoodie.gcp.bigquery.sync.base_path=gs://rxusandbox/testcases/stocks/data/target/${NOW} \
--hoodie-conf hoodie.gcp.bigquery.sync.partition_fields=year,month,day \
--hoodie-conf hoodie.gcp.bigquery.sync.source_uri=gs://my-hoodie-table/path/year=* \
--hoodie-conf hoodie.gcp.bigquery.sync.source_uri_prefix=gs://my-hoodie-table/path/ \
--hoodie-conf hoodie.gcp.bigquery.sync.use_file_listing_from_metadata=true \
--hoodie-conf hoodie.gcp.bigquery.sync.assume_date_partitioning=false \
--hoodie-conf hoodie.datasource.hive_sync.mode=jdbc \
--hoodie-conf hoodie.datasource.hive_sync.jdbcurl=jdbc:hive2://localhost:10000 \
--hoodie-conf hoodie.datasource.hive_sync.skip_ro_suffix=true \
--hoodie-conf hoodie.datasource.hive_sync.ignore_exceptions=false \
--hoodie-conf hoodie.datasource.hive_sync.database=mydataset \
--hoodie-conf hoodie.datasource.hive_sync.table=mytable \
--hoodie-conf hoodie.datasource.write.recordkey.field=mykey \
--hoodie-conf hoodie.datasource.write.partitionpath.field=year,month,day \
--hoodie-conf hoodie.datasource.write.precombine.field=ts \
--hoodie-conf hoodie.datasource.write.keygenerator.type=COMPLEX \
--hoodie-conf hoodie.datasource.write.hive_style_partitioning=true \
--hoodie-conf hoodie.datasource.write.drop.partition.columns=true \
--hoodie-conf hoodie.partition.metafile.use.base.format=true \
--hoodie-conf hoodie.metadata.enable=true
```
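
Once the job completes, you can sanity-check the synced table from BigQuery, e.g. with the bq CLI. The project, dataset, and table names below match the example above; the query assumes the year partition field is synced as a string column:

```shell
# Counting rows in one partition; the filter on the partition column also
# satisfies require_partition_filter when that option is enabled.
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) FROM `hudi-bq.rxusandbox.mytable` WHERE year = "2023"'
```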