Docker Demo

A Demo using Docker containers

Let's use a real-world example to see how Hudi works end to end. For this purpose, a self-contained data infrastructure is brought up in a local Docker cluster on your computer. It requires the Hudi repo to have been cloned locally.

The steps have been tested on a Mac laptop.

Prerequisites

  • Clone the Hudi repository to your local machine.

  • Docker Setup : For Mac, please follow the steps as defined in Install Docker Desktop on Mac. For running Spark-SQL queries, please ensure at least 6 GB and 4 CPUs are allocated to Docker (see Docker -> Preferences -> Advanced). Otherwise, Spark-SQL queries could be killed because of memory issues.

  • kcat : A command-line utility to publish/consume from kafka topics. Use brew install kcat to install kcat.

  • /etc/hosts : The demo references many services running in containers by hostname. Add the following settings to /etc/hosts (a one-shot way to append these entries is sketched at the end of this prerequisites list):

    1. 127.0.0.1 adhoc-1
    2. 127.0.0.1 adhoc-2
    3. 127.0.0.1 namenode
    4. 127.0.0.1 datanode1
    5. 127.0.0.1 hiveserver
    6. 127.0.0.1 hivemetastore
    7. 127.0.0.1 kafkabroker
    8. 127.0.0.1 sparkmaster
    9. 127.0.0.1 zookeeper
  • Java : Java SE Development Kit 8.

  • Maven : A build automation tool for Java projects.

  • jq : A lightweight and flexible command-line JSON processor. Use brew install jq to install jq.

Also, this has not been tested on some environments like Docker on Windows.
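
If you prefer not to edit /etc/hosts by hand, here is a minimal sketch that appends all of the required entries in one go (it assumes you are comfortable running it with sudo and that none of these hostnames are already mapped):

    # Append a 127.0.0.1 mapping for each demo hostname
    for h in adhoc-1 adhoc-2 namenode datanode1 hiveserver hivemetastore kafkabroker sparkmaster zookeeper; do
      echo "127.0.0.1 $h" | sudo tee -a /etc/hosts > /dev/null
    done
    # Verify that one of the entries resolves locally
    ping -c 1 namenode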

Setting up Docker Cluster

Build Hudi

The first step is to build Hudi. Note: This step builds Hudi against the default supported Scala version, 2.11.

NOTE: Make sure you’ve cloned the Hudi repository first.
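
Before running the build, it can help to confirm that the local toolchain matches the prerequisites above; a minimal sketch, assuming java and mvn are already on your PATH:

    # Expect a 1.8.x JDK and a working Maven installation
    java -version
    mvn -version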

  1. cd <HUDI_WORKSPACE>
  2. mvn clean package -Pintegration-tests -DskipTests

Bringing up Demo Cluster

The next step is to run the Docker compose script and set up the configs for bringing up the cluster. These files are in the Hudi repository, which you should already have locally on your machine from the previous steps.

This should pull the Docker images from Docker Hub and set up the Docker cluster.

  • Default
  • Mac AArch64
  1. cd docker
  2. ./setup_demo.sh
  3. ....
  4. ....
  5. ....
  6. [+] Running 10/13
  7. Container zookeeper Removed 8.6s
  8. Container datanode1 Removed 18.3s
  9. Container trino-worker-1 Removed 50.7s
  10. Container spark-worker-1 Removed 16.7s
  11. Container adhoc-2 Removed 16.9s
  12. Container graphite Removed 16.9s
  13. Container kafkabroker Removed 14.1s
  14. Container adhoc-1 Removed 14.1s
  15. Container presto-worker-1 Removed 11.9s
  16. Container presto-coordinator-1 Removed 34.6s
  17. .......
  18. ......
  19. [+] Running 17/17
  20. adhoc-1 Pulled 2.9s
  21. graphite Pulled 2.8s
  22. spark-worker-1 Pulled 3.0s
  23. kafka Pulled 2.9s
  24. datanode1 Pulled 2.9s
  25. hivemetastore Pulled 2.9s
  26. hiveserver Pulled 3.0s
  27. hive-metastore-postgresql Pulled 2.8s
  28. presto-coordinator-1 Pulled 2.9s
  29. namenode Pulled 2.9s
  30. trino-worker-1 Pulled 2.9s
  31. sparkmaster Pulled 2.9s
  32. presto-worker-1 Pulled 2.9s
  33. zookeeper Pulled 2.8s
  34. adhoc-2 Pulled 2.9s
  35. historyserver Pulled 2.9s
  36. trino-coordinator-1 Pulled 2.9s
  37. [+] Running 17/17
  38. Container zookeeper Started 41.0s
  39. Container kafkabroker Started 41.7s
  40. Container graphite Started 41.5s
  41. Container hive-metastore-postgresql Running 0.0s
  42. Container namenode Running 0.0s
  43. Container hivemetastore Running 0.0s
  44. Container trino-coordinator-1 Runni... 0.0s
  45. Container presto-coordinator-1 Star... 42.1s
  46. Container historyserver Started 41.0s
  47. Container datanode1 Started 49.9s
  48. Container hiveserver Running 0.0s
  49. Container trino-worker-1 Started 42.1s
  50. Container sparkmaster Started 41.9s
  51. Container spark-worker-1 Started 50.2s
  52. Container adhoc-2 Started 38.5s
  53. Container adhoc-1 Started 38.5s
  54. Container presto-worker-1 Started 38.4s
  55. Copying spark default config and setting up configs
  56. Copying spark default config and setting up configs
  57. $ docker ps

Please note the following for Mac AArch64 users:

  • The demo must be built and run using the master branch. We currently plan to include support starting with the 0.13.0 release.
  • Presto and Trino are not currently supported in the demo.
  1. cd docker
  2. ./setup_demo.sh --mac-aarch64
  3. .......
  4. ......
  5. [+] Running 12/12
  6. adhoc-1 Pulled 2.9s
  7. spark-worker-1 Pulled 3.0s
  8. kafka Pulled 2.9s
  9. datanode1 Pulled 2.9s
  10. hivemetastore Pulled 2.9s
  11. hiveserver Pulled 3.0s
  12. hive-metastore-postgresql Pulled 2.8s
  13. namenode Pulled 2.9s
  14. sparkmaster Pulled 2.9s
  15. zookeeper Pulled 2.8s
  16. adhoc-2 Pulled 2.9s
  17. historyserver Pulled 2.9s
  18. [+] Running 12/12
  19. Container zookeeper Started 41.0s
  20. Container kafkabroker Started 41.7s
  21. Container hive-metastore-postgresql Running 0.0s
  22. Container namenode Running 0.0s
  23. Container hivemetastore Running 0.0s
  24. Container historyserver Started 41.0s
  25. Container datanode1 Started 49.9s
  26. Container hiveserver Running 0.0s
  27. Container sparkmaster Started 41.9s
  28. Container spark-worker-1 Started 50.2s
  29. Container adhoc-2 Started 38.5s
  30. Container adhoc-1 Started 38.5s
  31. Copying spark default config and setting up configs
  32. Copying spark default config and setting up configs
  33. $ docker ps

At this point, the Docker cluster will be up and running. The demo cluster brings up the following services (a quick health check is sketched after the list):

  • HDFS Services (NameNode, DataNode)
  • Spark Master and Worker
  • Hive Services (Metastore, HiveServer2 along with PostgresDB)
  • Kafka Broker and a Zookeeper Node (Kafka will be used as upstream source for the demo)
  • Containers for Presto setup (Presto coordinator and worker)
  • Containers for Trino setup (Trino coordinator and worker)
  • Adhoc containers to run Hudi/Hive CLI commands
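
A quick health check is to list the running containers and probe the NameNode web UI; a minimal sketch (the container names are the ones shown in the setup output above):

    # List the demo containers and their status
    docker ps --format 'table {{.Names}}\t{{.Status}}'
    # The NameNode web UI should answer with an HTTP status once HDFS is up
    curl -s -o /dev/null -w "%{http_code}\n" http://namenode:50070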

Demo

Stock Tracker data will be used to showcase different Hudi query types and the effects of Compaction.

Take a look at the directory docker/demo/data. There are 2 batches of stock data, each at 1-minute granularity. The first batch contains stock tracker data for some stock symbols during the first hour of the trading window (9:30 a.m. to 10:30 a.m.). The second batch contains tracker data for the next 30 minutes (10:30 a.m. to 11 a.m.). Hudi will be used to ingest these batches to a table which will contain the latest stock tracker data at hour-level granularity. The batches are windowed intentionally so that the second batch contains updates to some of the rows in the first batch.
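
Before publishing anything, you can peek at the input batches to see the shape of a record and how many events each batch contains; a small sketch run from the repository root, assuming the demo files are newline-delimited JSON as shipped in the repo:

    # Count the events in each batch
    wc -l docker/demo/data/batch_1.json docker/demo/data/batch_2.json
    # Pretty-print a single record to inspect its fields (symbol, ts, volume, open, close, ...)
    head -n 1 docker/demo/data/batch_1.json | jq .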

Step 1: Publish the first batch to Kafka

Upload the first batch to the Kafka topic 'stock_ticks':

cat docker/demo/data/batch_1.json | kcat -b kafkabroker -t stock_ticks -P
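
To sanity-check the publish, you can also consume a few messages back from the topic; a minimal sketch using kcat in consumer mode:

    # Read the first 3 messages from the stock_ticks topic and exit
    kcat -b kafkabroker -t stock_ticks -C -o beginning -c 3 -e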

To check if the new topic shows up, use

  1. kcat -b kafkabroker -L -J | jq .
  2. {
  3. "originating_broker": {
  4. "id": 1001,
  5. "name": "kafkabroker:9092/1001"
  6. },
  7. "query": {
  8. "topic": "*"
  9. },
  10. "brokers": [
  11. {
  12. "id": 1001,
  13. "name": "kafkabroker:9092"
  14. }
  15. ],
  16. "topics": [
  17. {
  18. "topic": "stock_ticks",
  19. "partitions": [
  20. {
  21. "partition": 0,
  22. "leader": 1001,
  23. "replicas": [
  24. {
  25. "id": 1001
  26. }
  27. ],
  28. "isrs": [
  29. {
  30. "id": 1001
  31. }
  32. ]
  33. }
  34. ]
  35. }
  36. ]
  37. }

Step 2: Incrementally ingest data from Kafka topic

Hudi comes with a tool named Hudi Streamer. This tool can connect to a variety of data sources (including Kafka) to pull changes and apply them to a Hudi table using upsert/insert primitives. Here, we will use the tool to download JSON data from the Kafka topic and ingest it into both COW and MOR tables. This tool automatically initializes the tables in the file system if they do not exist yet.

  1. docker exec -it adhoc-2 /bin/bash
  2. # Run the following spark-submit command to execute the Hudi Streamer and ingest to stock_ticks_cow table in HDFS
  3. spark-submit \
  4. --class org.apache.hudi.utilities.streamer.HoodieStreamer $HUDI_UTILITIES_BUNDLE \
  5. --table-type COPY_ON_WRITE \
  6. --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  7. --source-ordering-field ts \
  8. --target-base-path /user/hive/warehouse/stock_ticks_cow \
  9. --target-table stock_ticks_cow --props /var/demo/config/kafka-source.properties \
  10. --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider
  11. # Run the following spark-submit command to execute the Hudi Streamer and ingest to stock_ticks_mor table in HDFS
  12. spark-submit \
  13. --class org.apache.hudi.utilities.streamer.HoodieStreamer $HUDI_UTILITIES_BUNDLE \
  14. --table-type MERGE_ON_READ \
  15. --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  16. --source-ordering-field ts \
  17. --target-base-path /user/hive/warehouse/stock_ticks_mor \
  18. --target-table stock_ticks_mor \
  19. --props /var/demo/config/kafka-source.properties \
  20. --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider \
  21. --disable-compaction
  22. # As part of the setup (look at setup_demo.sh), the configs needed for Hudi Streamer are uploaded to HDFS. The configs
  23. # contain mostly Kafka connectivity settings, the avro schema to be used for ingestion, along with key and partitioning fields.
  24. exit

You can use the HDFS web UI to look at the tables: http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_cow.

You can explore the new partition folder created in the table, along with a “commit” / “deltacommit” file under .hoodie which signals a successful commit.

There will be a similar setup when you browse the MOR table: http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_mor
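
If you prefer the command line over the web UI, a minimal sketch for inspecting the table layout from one of the adhoc containers (assuming the HDFS client is on the container's PATH, as it is in this demo setup):

    # Partition folders and data files for the COW table
    docker exec adhoc-1 hdfs dfs -ls -R /user/hive/warehouse/stock_ticks_cow
    # The timeline under .hoodie should contain the commit for the first batch
    docker exec adhoc-1 hdfs dfs -ls /user/hive/warehouse/stock_ticks_cow/.hoodie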

Step 3: Sync with Hive

At this step, the tables are available in HDFS. We need to sync with Hive to create new Hive tables and add partitions in order to run Hive queries against those tables.

  1. docker exec -it adhoc-2 /bin/bash
  2. # This command takes in HiveServer URL and COW Hudi table location in HDFS and sync the HDFS state to Hive
  3. /var/hoodie/ws/hudi-sync/hudi-hive-sync/run_sync_tool.sh \
  4. --jdbc-url jdbc:hive2://hiveserver:10000 \
  5. --user hive \
  6. --pass hive \
  7. --partitioned-by dt \
  8. --base-path /user/hive/warehouse/stock_ticks_cow \
  9. --database default \
  10. --table stock_ticks_cow \
  11. --partition-value-extractor org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor
  12. .....
  13. 2020-01-25 19:51:28,953 INFO [main] hive.HiveSyncTool (HiveSyncTool.java:syncHoodieTable(129)) - Sync complete for stock_ticks_cow
  14. .....
  15. # Now run hive-sync for the second data-set in HDFS using Merge-On-Read (MOR table type)
  16. /var/hoodie/ws/hudi-sync/hudi-hive-sync/run_sync_tool.sh \
  17. --jdbc-url jdbc:hive2://hiveserver:10000 \
  18. --user hive \
  19. --pass hive \
  20. --partitioned-by dt \
  21. --base-path /user/hive/warehouse/stock_ticks_mor \
  22. --database default \
  23. --table stock_ticks_mor \
  24. --partition-value-extractor org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor
  25. ...
  26. 2020-01-25 19:51:51,066 INFO [main] hive.HiveSyncTool (HiveSyncTool.java:syncHoodieTable(129)) - Sync complete for stock_ticks_mor_ro
  27. ...
  28. 2020-01-25 19:51:51,569 INFO [main] hive.HiveSyncTool (HiveSyncTool.java:syncHoodieTable(129)) - Sync complete for stock_ticks_mor_rt
  29. ....
  30. exit

After executing the above commands, you will notice

  1. A Hive table named stock_ticks_cow is created which supports Snapshot and Incremental queries on the Copy-On-Write table.
  2. Two new tables stock_ticks_mor_rt and stock_ticks_mor_ro are created for the Merge-On-Read table. The former supports Snapshot and Incremental queries (providing near-real-time data) while the latter supports ReadOptimized queries.

Step 4 (a): Run Hive Queries

Run a Hive query to find the latest timestamp ingested for stock symbol ‘GOOG’. You will notice that both snapshot queries (for the COW table and the MOR _rt table) and the read-optimized query (for the MOR _ro table) give the same value, “10:29 a.m.”, as Hudi creates a parquet file for the first batch of data.

  1. docker exec -it adhoc-2 /bin/bash
  2. beeline -u jdbc:hive2://hiveserver:10000 \
  3. --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
  4. --hiveconf hive.stats.autogather=false
  5. # List Tables
  6. 0: jdbc:hive2://hiveserver:10000> show tables;
  7. +---------------------+--+
  8. | tab_name |
  9. +---------------------+--+
  10. | stock_ticks_cow |
  11. | stock_ticks_mor_ro |
  12. | stock_ticks_mor_rt |
  13. +---------------------+--+
  14. 3 rows selected (1.199 seconds)
  15. 0: jdbc:hive2://hiveserver:10000>
  16. # Look at partitions that were added
  17. 0: jdbc:hive2://hiveserver:10000> show partitions stock_ticks_mor_rt;
  18. +----------------+--+
  19. | partition |
  20. +----------------+--+
  21. | dt=2018-08-31 |
  22. +----------------+--+
  23. 1 row selected (0.24 seconds)
  24. # COPY-ON-WRITE Queries:
  25. =========================
  26. 0: jdbc:hive2://hiveserver:10000> select symbol, max(ts) from stock_ticks_cow group by symbol HAVING symbol = 'GOOG';
  27. +---------+----------------------+--+
  28. | symbol | _c1 |
  29. +---------+----------------------+--+
  30. | GOOG | 2018-08-31 10:29:00 |
  31. +---------+----------------------+--+
  32. Now, run a projection query:
  33. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG';
  34. +----------------------+---------+----------------------+---------+------------+-----------+--+
  35. | _hoodie_commit_time | symbol | ts | volume | open | close |
  36. +----------------------+---------+----------------------+---------+------------+-----------+--+
  37. | 20180924221953 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  38. | 20180924221953 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085 |
  39. +----------------------+---------+----------------------+---------+------------+-----------+--+
  40. # Merge-On-Read Queries:
  41. ==========================
  42. Let's run similar queries against the M-O-R table. Let's look at both
  43. ReadOptimized and Snapshot (realtime data) queries supported by the M-O-R table
  44. # Run ReadOptimized Query. Notice that the latest timestamp is 10:29
  45. 0: jdbc:hive2://hiveserver:10000> select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';
  46. WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
  47. +---------+----------------------+--+
  48. | symbol | _c1 |
  49. +---------+----------------------+--+
  50. | GOOG | 2018-08-31 10:29:00 |
  51. +---------+----------------------+--+
  52. 1 row selected (6.326 seconds)
  53. # Run Snapshot Query. Notice that the latest timestamp is again 10:29
  54. 0: jdbc:hive2://hiveserver:10000> select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG';
  55. WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
  56. +---------+----------------------+--+
  57. | symbol | _c1 |
  58. +---------+----------------------+--+
  59. | GOOG | 2018-08-31 10:29:00 |
  60. +---------+----------------------+--+
  61. 1 row selected (1.606 seconds)
  62. # Run Read Optimized and Snapshot project queries
  63. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG';
  64. +----------------------+---------+----------------------+---------+------------+-----------+--+
  65. | _hoodie_commit_time | symbol | ts | volume | open | close |
  66. +----------------------+---------+----------------------+---------+------------+-----------+--+
  67. | 20180924222155 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  68. | 20180924222155 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085 |
  69. +----------------------+---------+----------------------+---------+------------+-----------+--+
  70. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_rt where symbol = 'GOOG';
  71. +----------------------+---------+----------------------+---------+------------+-----------+--+
  72. | _hoodie_commit_time | symbol | ts | volume | open | close |
  73. +----------------------+---------+----------------------+---------+------------+-----------+--+
  74. | 20180924222155 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  75. | 20180924222155 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085 |
  76. +----------------------+---------+----------------------+---------+------------+-----------+--+
  77. exit

Step 4 (b): Run Spark-SQL Queries

Hudi supports Spark as a query processor just like Hive. Here are the same Hive queries running in Spark SQL:

  1. docker exec -it adhoc-1 /bin/bash
  2. $SPARK_INSTALL/bin/spark-shell \
  3. --jars $HUDI_SPARK_BUNDLE \
  4. --master local[2] \
  5. --driver-class-path $HADOOP_CONF_DIR \
  6. --conf spark.sql.hive.convertMetastoreParquet=false \
  7. --deploy-mode client \
  8. --driver-memory 1G \
  9. --executor-memory 3G \
  10. --num-executors 1
  11. ...
  12. Welcome to
  13. ____ __
  14. / __/__ ___ _____/ /__
  15. _\ \/ _ \/ _ `/ __/ '_/
  16. /___/ .__/\_,_/_/ /_/\_\ version 2.4.4
  17. /_/
  18. Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_212)
  19. Type in expressions to have them evaluated.
  20. Type :help for more information.
  21. scala> spark.sql("show tables").show(100, false)
  22. +--------+------------------+-----------+
  23. |database|tableName |isTemporary|
  24. +--------+------------------+-----------+
  25. |default |stock_ticks_cow |false |
  26. |default |stock_ticks_mor_ro|false |
  27. |default |stock_ticks_mor_rt|false |
  28. +--------+------------------+-----------+
  29. # Copy-On-Write Table
  30. ## Run max timestamp query against COW table
  31. scala> spark.sql("select symbol, max(ts) from stock_ticks_cow group by symbol HAVING symbol = 'GOOG'").show(100, false)
  32. [Stage 0:> (0 + 1) / 1]SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
  33. SLF4J: Defaulting to no-operation (NOP) logger implementation
  34. SLF4J: See http://www.slf4j.org/codes#StaticLoggerBinder for further details.
  35. +------+-------------------+
  36. |symbol|max(ts) |
  37. +------+-------------------+
  38. |GOOG |2018-08-31 10:29:00|
  39. +------+-------------------+
  40. ## Projection Query
  41. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG'").show(100, false)
  42. +-------------------+------+-------------------+------+---------+--------+
  43. |_hoodie_commit_time|symbol|ts |volume|open |close |
  44. +-------------------+------+-------------------+------+---------+--------+
  45. |20180924221953 |GOOG |2018-08-31 09:59:00|6330 |1230.5 |1230.02 |
  46. |20180924221953 |GOOG |2018-08-31 10:29:00|3391 |1230.1899|1230.085|
  47. +-------------------+------+-------------------+------+---------+--------+
  48. # Merge-On-Read Queries:
  49. ==========================
  50. Let's run similar queries against the M-O-R table. Let's look at both
  51. ReadOptimized and Snapshot queries supported by the M-O-R table
  52. # Run ReadOptimized Query. Notice that the latest timestamp is 10:29
  53. scala> spark.sql("select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG'").show(100, false)
  54. +------+-------------------+
  55. |symbol|max(ts) |
  56. +------+-------------------+
  57. |GOOG |2018-08-31 10:29:00|
  58. +------+-------------------+
  59. # Run Snapshot Query. Notice that the latest timestamp is again 10:29
  60. scala> spark.sql("select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG'").show(100, false)
  61. +------+-------------------+
  62. |symbol|max(ts) |
  63. +------+-------------------+
  64. |GOOG |2018-08-31 10:29:00|
  65. +------+-------------------+
  66. # Run Read Optimized and Snapshot project queries
  67. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG'").show(100, false)
  68. +-------------------+------+-------------------+------+---------+--------+
  69. |_hoodie_commit_time|symbol|ts |volume|open |close |
  70. +-------------------+------+-------------------+------+---------+--------+
  71. |20180924222155 |GOOG |2018-08-31 09:59:00|6330 |1230.5 |1230.02 |
  72. |20180924222155 |GOOG |2018-08-31 10:29:00|3391 |1230.1899|1230.085|
  73. +-------------------+------+-------------------+------+---------+--------+
  74. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_rt where symbol = 'GOOG'").show(100, false)
  75. +-------------------+------+-------------------+------+---------+--------+
  76. |_hoodie_commit_time|symbol|ts |volume|open |close |
  77. +-------------------+------+-------------------+------+---------+--------+
  78. |20180924222155 |GOOG |2018-08-31 09:59:00|6330 |1230.5 |1230.02 |
  79. |20180924222155 |GOOG |2018-08-31 10:29:00|3391 |1230.1899|1230.085|
  80. +-------------------+------+-------------------+------+---------+--------+

Step 4 (c): Run Presto Queries

Here are Presto queries similar to the Hive and Spark queries above.

Note:

  • Currently, Presto does not support snapshot or incremental queries on Hudi tables.
  • This section of the demo is not supported for Mac AArch64 users at this time.
  1. docker exec -it presto-worker-1 presto --server presto-coordinator-1:8090
  2. presto> show catalogs;
  3. Catalog
  4. -----------
  5. hive
  6. jmx
  7. localfile
  8. system
  9. (4 rows)
  10. Query 20190817_134851_00000_j8rcz, FINISHED, 1 node
  11. Splits: 19 total, 19 done (100.00%)
  12. 0:04 [0 rows, 0B] [0 rows/s, 0B/s]
  13. presto> use hive.default;
  14. USE
  15. presto:default> show tables;
  16. Table
  17. --------------------
  18. stock_ticks_cow
  19. stock_ticks_mor_ro
  20. stock_ticks_mor_rt
  21. (3 rows)
  22. Query 20190822_181000_00001_segyw, FINISHED, 2 nodes
  23. Splits: 19 total, 19 done (100.00%)
  24. 0:05 [3 rows, 99B] [0 rows/s, 18B/s]
  25. # COPY-ON-WRITE Queries:
  26. =========================
  27. presto:default> select symbol, max(ts) from stock_ticks_cow group by symbol HAVING symbol = 'GOOG';
  28. symbol | _col1
  29. --------+---------------------
  30. GOOG | 2018-08-31 10:29:00
  31. (1 row)
  32. Query 20190822_181011_00002_segyw, FINISHED, 1 node
  33. Splits: 49 total, 49 done (100.00%)
  34. 0:12 [197 rows, 613B] [16 rows/s, 50B/s]
  35. presto:default> select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG';
  36. _hoodie_commit_time | symbol | ts | volume | open | close
  37. ---------------------+--------+---------------------+--------+-----------+----------
  38. 20190822180221 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  39. 20190822180221 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085
  40. (2 rows)
  41. Query 20190822_181141_00003_segyw, FINISHED, 1 node
  42. Splits: 17 total, 17 done (100.00%)
  43. 0:02 [197 rows, 613B] [109 rows/s, 341B/s]
  44. # Merge-On-Read Queries:
  45. ==========================
  46. Let's run similar queries against the M-O-R table.
  47. # Run ReadOptimized Query. Notice that the latest timestamp is 10:29
  48. presto:default> select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';
  49. symbol | _col1
  50. --------+---------------------
  51. GOOG | 2018-08-31 10:29:00
  52. (1 row)
  53. Query 20190822_181158_00004_segyw, FINISHED, 1 node
  54. Splits: 49 total, 49 done (100.00%)
  55. 0:02 [197 rows, 613B] [110 rows/s, 343B/s]
  56. presto:default> select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG';
  57. _hoodie_commit_time | symbol | ts | volume | open | close
  58. ---------------------+--------+---------------------+--------+-----------+----------
  59. 20190822180250 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  60. 20190822180250 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085
  61. (2 rows)
  62. Query 20190822_181256_00006_segyw, FINISHED, 1 node
  63. Splits: 17 total, 17 done (100.00%)
  64. 0:02 [197 rows, 613B] [92 rows/s, 286B/s]
  65. presto:default> exit

Step 4 (d): Run Trino Queries

Here are similar queries with Trino.

Note:

  • Currently, Trino does not support snapshot or incremental queries on Hudi tables.
  • This section of the demo is not supported for Mac AArch64 users at this time.
  1. docker exec -it adhoc-2 trino --server trino-coordinator-1:8091
  2. trino> show catalogs;
  3. Catalog
  4. ---------
  5. hive
  6. system
  7. (2 rows)
  8. Query 20220112_055038_00000_sac73, FINISHED, 1 node
  9. Splits: 19 total, 19 done (100.00%)
  10. 3.74 [0 rows, 0B] [0 rows/s, 0B/s]
  11. trino> use hive.default;
  12. USE
  13. trino:default> show tables;
  14. Table
  15. --------------------
  16. stock_ticks_cow
  17. stock_ticks_mor_ro
  18. stock_ticks_mor_rt
  19. (3 rows)
  20. Query 20220112_055050_00003_sac73, FINISHED, 2 nodes
  21. Splits: 19 total, 19 done (100.00%)
  22. 1.84 [3 rows, 102B] [1 rows/s, 55B/s]
  23. # COPY-ON-WRITE Queries:
  24. =========================
  25. trino:default> select symbol, max(ts) from stock_ticks_cow group by symbol HAVING symbol = 'GOOG';
  26. symbol | _col1
  27. --------+---------------------
  28. GOOG | 2018-08-31 10:29:00
  29. (1 row)
  30. Query 20220112_055101_00005_sac73, FINISHED, 1 node
  31. Splits: 49 total, 49 done (100.00%)
  32. 4.08 [197 rows, 442KB] [48 rows/s, 108KB/s]
  33. trino:default> select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG';
  34. _hoodie_commit_time | symbol | ts | volume | open | close
  35. ---------------------+--------+---------------------+--------+-----------+----------
  36. 20220112054822108 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  37. 20220112054822108 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085
  38. (2 rows)
  39. Query 20220112_055113_00006_sac73, FINISHED, 1 node
  40. Splits: 17 total, 17 done (100.00%)
  41. 0.40 [197 rows, 450KB] [487 rows/s, 1.09MB/s]
  42. # Merge-On-Read Queries:
  43. ==========================
  44. Let's run similar queries against the MOR table.
  45. # Run ReadOptimized Query. Notice that the latest timestamp is 10:29
  46. trino:default> select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';
  47. symbol | _col1
  48. --------+---------------------
  49. GOOG | 2018-08-31 10:29:00
  50. (1 row)
  51. Query 20220112_055125_00007_sac73, FINISHED, 1 node
  52. Splits: 49 total, 49 done (100.00%)
  53. 0.50 [197 rows, 442KB] [395 rows/s, 888KB/s]
  54. trino:default> select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG';
  55. _hoodie_commit_time | symbol | ts | volume | open | close
  56. ---------------------+--------+---------------------+--------+-----------+----------
  57. 20220112054844841 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  58. 20220112054844841 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085
  59. (2 rows)
  60. Query 20220112_055136_00008_sac73, FINISHED, 1 node
  61. Splits: 17 total, 17 done (100.00%)
  62. 0.49 [197 rows, 450KB] [404 rows/s, 924KB/s]
  63. trino:default> exit

Step 5: Upload second batch to Kafka and run Hudi Streamer to ingest

Upload the second batch of data and ingest this batch using Hudi Streamer. As this batch does not bring in any new partitions, there is no need to run Hive sync.

  1. cat docker/demo/data/batch_2.json | kcat -b kafkabroker -t stock_ticks -P
  2. # Within Docker container, run the ingestion command
  3. docker exec -it adhoc-2 /bin/bash
  4. # Run the following spark-submit command to execute the Hudi Streamer and ingest to stock_ticks_cow table in HDFS
  5. spark-submit \
  6. --class org.apache.hudi.utilities.streamer.HoodieStreamer $HUDI_UTILITIES_BUNDLE \
  7. --table-type COPY_ON_WRITE \
  8. --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  9. --source-ordering-field ts \
  10. --target-base-path /user/hive/warehouse/stock_ticks_cow \
  11. --target-table stock_ticks_cow \
  12. --props /var/demo/config/kafka-source.properties \
  13. --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider
  14. # Run the following spark-submit command to execute the Hudi Streamer and ingest to stock_ticks_mor table in HDFS
  15. spark-submit \
  16. --class org.apache.hudi.utilities.streamer.HoodieStreamer $HUDI_UTILITIES_BUNDLE \
  17. --table-type MERGE_ON_READ \
  18. --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  19. --source-ordering-field ts \
  20. --target-base-path /user/hive/warehouse/stock_ticks_mor \
  21. --target-table stock_ticks_mor \
  22. --props /var/demo/config/kafka-source.properties \
  23. --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider \
  24. --disable-compaction
  25. exit

With the Copy-On-Write table, the second ingestion by Hudi Streamer resulted in a new version of the Parquet file being created. See http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_cow/2018/08/31

With the Merge-On-Read table, the second ingestion merely appended the batch to an unmerged delta (log) file. Take a look at the HDFS filesystem to get an idea: http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_mor/2018/08/31
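
You can also compare the two table layouts from the command line; a small sketch (again assuming the HDFS client is available in the adhoc container) that should show a second parquet file version under the COW partition and a delta (log) file under the MOR partition:

    # COW: each ingestion creates a new parquet file version
    docker exec adhoc-1 hdfs dfs -ls /user/hive/warehouse/stock_ticks_cow/2018/08/31
    # MOR: the second batch lands in an unmerged delta (log) file next to the base file
    docker exec adhoc-1 hdfs dfs -ls /user/hive/warehouse/stock_ticks_mor/2018/08/31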

Step 6 (a): Run Hive Queries

With the Copy-On-Write table, the Snapshot query immediately sees the changes from the second batch once it is committed, as each ingestion creates newer versions of parquet files.

With the Merge-On-Read table, the second ingestion merely appended the batch to an unmerged delta (log) file. This is when ReadOptimized and Snapshot queries start to provide different results. The ReadOptimized query will still return “10:29 a.m.” as it only reads from the Parquet file. The Snapshot query will do an on-the-fly merge and return the latest committed data, which is “10:59 a.m.”.

  1. docker exec -it adhoc-2 /bin/bash
  2. beeline -u jdbc:hive2://hiveserver:10000 \
  3. --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
  4. --hiveconf hive.stats.autogather=false
  5. # Copy On Write Table:
  6. 0: jdbc:hive2://hiveserver:10000> select symbol, max(ts) from stock_ticks_cow group by symbol HAVING symbol = 'GOOG';
  7. WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
  8. +---------+----------------------+--+
  9. | symbol | _c1 |
  10. +---------+----------------------+--+
  11. | GOOG | 2018-08-31 10:59:00 |
  12. +---------+----------------------+--+
  13. 1 row selected (1.932 seconds)
  14. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG';
  15. +----------------------+---------+----------------------+---------+------------+-----------+--+
  16. | _hoodie_commit_time | symbol | ts | volume | open | close |
  17. +----------------------+---------+----------------------+---------+------------+-----------+--+
  18. | 20180924221953 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  19. | 20180924224524 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  20. +----------------------+---------+----------------------+---------+------------+-----------+--+
  21. As you can notice, the above queries now reflect the changes that came as part of ingesting the second batch.
  22. # Merge On Read Table:
  23. # Read Optimized Query
  24. 0: jdbc:hive2://hiveserver:10000> select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';
  25. WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
  26. +---------+----------------------+--+
  27. | symbol | _c1 |
  28. +---------+----------------------+--+
  29. | GOOG | 2018-08-31 10:29:00 |
  30. +---------+----------------------+--+
  31. 1 row selected (1.6 seconds)
  32. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG';
  33. +----------------------+---------+----------------------+---------+------------+-----------+--+
  34. | _hoodie_commit_time | symbol | ts | volume | open | close |
  35. +----------------------+---------+----------------------+---------+------------+-----------+--+
  36. | 20180924222155 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  37. | 20180924222155 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085 |
  38. +----------------------+---------+----------------------+---------+------------+-----------+--+
  39. # Snapshot Query
  40. 0: jdbc:hive2://hiveserver:10000> select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG';
  41. WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
  42. +---------+----------------------+--+
  43. | symbol | _c1 |
  44. +---------+----------------------+--+
  45. | GOOG | 2018-08-31 10:59:00 |
  46. +---------+----------------------+--+
  47. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_rt where symbol = 'GOOG';
  48. +----------------------+---------+----------------------+---------+------------+-----------+--+
  49. | _hoodie_commit_time | symbol | ts | volume | open | close |
  50. +----------------------+---------+----------------------+---------+------------+-----------+--+
  51. | 20180924222155 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  52. | 20180924224537 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  53. +----------------------+---------+----------------------+---------+------------+-----------+--+
  54. exit

Step 6 (b): Run Spark SQL Queries

Running the same queries in Spark-SQL:

  1. docker exec -it adhoc-1 /bin/bash
  2. $SPARK_INSTALL/bin/spark-shell \
  3. --jars $HUDI_SPARK_BUNDLE \
  4. --driver-class-path $HADOOP_CONF_DIR \
  5. --conf spark.sql.hive.convertMetastoreParquet=false \
  6. --deploy-mode client \
  7. --driver-memory 1G \
  8. --master local[2] \
  9. --executor-memory 3G \
  10. --num-executors 1
  11. # Copy On Write Table:
  12. scala> spark.sql("select symbol, max(ts) from stock_ticks_cow group by symbol HAVING symbol = 'GOOG'").show(100, false)
  13. +------+-------------------+
  14. |symbol|max(ts) |
  15. +------+-------------------+
  16. |GOOG |2018-08-31 10:59:00|
  17. +------+-------------------+
  18. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG'").show(100, false)
  19. +----------------------+---------+----------------------+---------+------------+-----------+--+
  20. | _hoodie_commit_time | symbol | ts | volume | open | close |
  21. +----------------------+---------+----------------------+---------+------------+-----------+--+
  22. | 20180924221953 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  23. | 20180924224524 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  24. +----------------------+---------+----------------------+---------+------------+-----------+--+
  25. As you can notice, the above queries now reflect the changes that came as part of ingesting the second batch.
  26. # Merge On Read Table:
  27. # Read Optimized Query
  28. scala> spark.sql("select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG'").show(100, false)
  29. +---------+----------------------+
  30. | symbol | _c1 |
  31. +---------+----------------------+
  32. | GOOG | 2018-08-31 10:29:00 |
  33. +---------+----------------------+
  34. 1 row selected (1.6 seconds)
  35. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG'").show(100, false)
  36. +----------------------+---------+----------------------+---------+------------+-----------+
  37. | _hoodie_commit_time | symbol | ts | volume | open | close |
  38. +----------------------+---------+----------------------+---------+------------+-----------+
  39. | 20180924222155 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  40. | 20180924222155 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085 |
  41. +----------------------+---------+----------------------+---------+------------+-----------+
  42. # Snapshot Query
  43. scala> spark.sql("select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG'").show(100, false)
  44. +---------+----------------------+
  45. | symbol | _c1 |
  46. +---------+----------------------+
  47. | GOOG | 2018-08-31 10:59:00 |
  48. +---------+----------------------+
  49. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_rt where symbol = 'GOOG'").show(100, false)
  50. +----------------------+---------+----------------------+---------+------------+-----------+
  51. | _hoodie_commit_time | symbol | ts | volume | open | close |
  52. +----------------------+---------+----------------------+---------+------------+-----------+
  53. | 20180924222155 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  54. | 20180924224537 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  55. +----------------------+---------+----------------------+---------+------------+-----------+
  56. exit

Step 6 (c): Run Presto Queries

Running the same ReadOptimized queries on Presto:

Note:

This section of the demo is not supported for Mac AArch64 users at this time.

  1. docker exec -it presto-worker-1 presto --server presto-coordinator-1:8090
  2. presto> use hive.default;
  3. USE
  4. # Copy On Write Table:
  5. presto:default>select symbol, max(ts) from stock_ticks_cow group by symbol HAVING symbol = 'GOOG';
  6. symbol | _col1
  7. --------+---------------------
  8. GOOG | 2018-08-31 10:59:00
  9. (1 row)
  10. Query 20190822_181530_00007_segyw, FINISHED, 1 node
  11. Splits: 49 total, 49 done (100.00%)
  12. 0:02 [197 rows, 613B] [125 rows/s, 389B/s]
  13. presto:default>select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG';
  14. _hoodie_commit_time | symbol | ts | volume | open | close
  15. ---------------------+--------+---------------------+--------+-----------+----------
  16. 20190822180221 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  17. 20190822181433 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215
  18. (2 rows)
  19. Query 20190822_181545_00008_segyw, FINISHED, 1 node
  20. Splits: 17 total, 17 done (100.00%)
  21. 0:02 [197 rows, 613B] [106 rows/s, 332B/s]
  22. As you can notice, the above queries now reflect the changes that came as part of ingesting the second batch.
  23. # Merge On Read Table:
  24. # Read Optimized Query
  25. presto:default> select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';
  26. symbol | _col1
  27. --------+---------------------
  28. GOOG | 2018-08-31 10:29:00
  29. (1 row)
  30. Query 20190822_181602_00009_segyw, FINISHED, 1 node
  31. Splits: 49 total, 49 done (100.00%)
  32. 0:01 [197 rows, 613B] [139 rows/s, 435B/s]
  33. presto:default>select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG';
  34. _hoodie_commit_time | symbol | ts | volume | open | close
  35. ---------------------+--------+---------------------+--------+-----------+----------
  36. 20190822180250 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  37. 20190822180250 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085
  38. (2 rows)
  39. Query 20190822_181615_00010_segyw, FINISHED, 1 node
  40. Splits: 17 total, 17 done (100.00%)
  41. 0:01 [197 rows, 613B] [154 rows/s, 480B/s]
  42. presto:default> exit

Step 6 (d): Run Trino Queries

Running the same Read-Optimized queries on Trino:

Note:

This section of the demo is not supported for Mac AArch64 users at this time.

  1. docker exec -it adhoc-2 trino --server trino-coordinator-1:8091
  2. trino> use hive.default;
  3. USE
  4. # Copy On Write Table:
  5. trino:default> select symbol, max(ts) from stock_ticks_cow group by symbol HAVING symbol = 'GOOG';
  6. symbol | _col1
  7. --------+---------------------
  8. GOOG | 2018-08-31 10:59:00
  9. (1 row)
  10. Query 20220112_055443_00012_sac73, FINISHED, 1 node
  11. Splits: 49 total, 49 done (100.00%)
  12. 0.63 [197 rows, 442KB] [310 rows/s, 697KB/s]
  13. trino:default> select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG';
  14. _hoodie_commit_time | symbol | ts | volume | open | close
  15. ---------------------+--------+---------------------+--------+-----------+----------
  16. 20220112054822108 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  17. 20220112055352654 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215
  18. (2 rows)
  19. Query 20220112_055450_00013_sac73, FINISHED, 1 node
  20. Splits: 17 total, 17 done (100.00%)
  21. 0.65 [197 rows, 450KB] [303 rows/s, 692KB/s]
  22. As you can notice, the above queries now reflect the changes that came as part of ingesting the second batch.
  23. # Merge On Read Table:
  24. # Read Optimized Query
  25. trino:default> select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';
  26. symbol | _col1
  27. --------+---------------------
  28. GOOG | 2018-08-31 10:29:00
  29. (1 row)
  30. Query 20220112_055500_00014_sac73, FINISHED, 1 node
  31. Splits: 49 total, 49 done (100.00%)
  32. 0.59 [197 rows, 442KB] [336 rows/s, 756KB/s]
  33. trino:default> select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG';
  34. _hoodie_commit_time | symbol | ts | volume | open | close
  35. ---------------------+--------+---------------------+--------+-----------+----------
  36. 20220112054844841 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  37. 20220112054844841 | GOOG | 2018-08-31 10:29:00 | 3391 | 1230.1899 | 1230.085
  38. (2 rows)
  39. Query 20220112_055506_00015_sac73, FINISHED, 1 node
  40. Splits: 17 total, 17 done (100.00%)
  41. 0.35 [197 rows, 450KB] [556 rows/s, 1.24MB/s]
  42. trino:default> exit

Step 7 (a): Incremental Query for COPY-ON-WRITE Table

With 2 batches of data ingested, let's showcase the support for incremental queries in Hudi Copy-On-Write tables.

Let's take the same projection query example:

  1. docker exec -it adhoc-2 /bin/bash
  2. beeline -u jdbc:hive2://hiveserver:10000 \
  3. --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
  4. --hiveconf hive.stats.autogather=false
  5. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG';
  6. +----------------------+---------+----------------------+---------+------------+-----------+--+
  7. | _hoodie_commit_time | symbol | ts | volume | open | close |
  8. +----------------------+---------+----------------------+---------+------------+-----------+--+
  9. | 20180924064621 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  10. | 20180924065039 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  11. +----------------------+---------+----------------------+---------+------------+-----------+--+

As you notice from the above queries, there are 2 commits - 20180924064621 and 20180924065039 - in timeline order. When you follow the steps, you will get different commit timestamps; substitute them in place of the above timestamps.
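
One way to find the commit timestamps for your own run is to list the completed commits on the table's timeline; a minimal sketch, assuming the HDFS client is available in the adhoc container:

    # Completed commits appear as <timestamp>.commit files under .hoodie
    docker exec adhoc-1 hdfs dfs -ls /user/hive/warehouse/stock_ticks_cow/.hoodie | grep '\.commit$'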

To show the effect of an incremental query, let us assume that a reader has already seen the changes from ingesting the first batch. Now, for the reader to see the effect of the second batch, they have to set the start timestamp to the commit time of the first batch (20180924064621) and run an incremental query.

Hudi's incremental mode provides efficient scanning for incremental queries by filtering out files that do not have any candidate rows, using Hudi-managed metadata.

  1. docker exec -it adhoc-2 /bin/bash
  2. beeline -u jdbc:hive2://hiveserver:10000 \
  3. --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
  4. --hiveconf hive.stats.autogather=false
  5. 0: jdbc:hive2://hiveserver:10000> set hoodie.stock_ticks_cow.consume.mode=INCREMENTAL;
  6. No rows affected (0.009 seconds)
  7. 0: jdbc:hive2://hiveserver:10000> set hoodie.stock_ticks_cow.consume.max.commits=3;
  8. No rows affected (0.009 seconds)
  9. 0: jdbc:hive2://hiveserver:10000> set hoodie.stock_ticks_cow.consume.start.timestamp=20180924064621;

With the above settings, file-ids that do not have any updates from the commit 20180924065039 are filtered out without scanning. Here is the incremental query:

  1. 0: jdbc:hive2://hiveserver:10000>
  2. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_cow where symbol = 'GOOG' and `_hoodie_commit_time` > '20180924064621';
  3. +----------------------+---------+----------------------+---------+------------+-----------+--+
  4. | _hoodie_commit_time | symbol | ts | volume | open | close |
  5. +----------------------+---------+----------------------+---------+------------+-----------+--+
  6. | 20180924065039 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  7. +----------------------+---------+----------------------+---------+------------+-----------+--+
  8. 1 row selected (0.83 seconds)
  9. 0: jdbc:hive2://hiveserver:10000>

Step 7 (b): Incremental Query with Spark SQL

  1. docker exec -it adhoc-1 /bin/bash
  2. $SPARK_INSTALL/bin/spark-shell \
  3. --jars $HUDI_SPARK_BUNDLE \
  4. --driver-class-path $HADOOP_CONF_DIR \
  5. --conf spark.sql.hive.convertMetastoreParquet=false \
  6. --deploy-mode client \
  7. --driver-memory 1G \
  8. --master local[2] \
  9. --executor-memory 3G \
  10. --num-executors 1
  11. Welcome to
  12. ____ __
  13. / __/__ ___ _____/ /__
  14. _\ \/ _ \/ _ `/ __/ '_/
  15. /___/ .__/\_,_/_/ /_/\_\ version 2.4.4
  16. /_/
  17. Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_212)
  18. Type in expressions to have them evaluated.
  19. Type :help for more information.
  20. scala> import org.apache.hudi.DataSourceReadOptions
  21. import org.apache.hudi.DataSourceReadOptions
  22. # In the below query, 20180924064621 is the first commit's timestamp
  23. scala> val hoodieIncViewDF = spark.read.format("org.apache.hudi").option(DataSourceReadOptions.QUERY_TYPE_OPT_KEY, DataSourceReadOptions.QUERY_TYPE_INCREMENTAL_OPT_VAL).option(DataSourceReadOptions.BEGIN_INSTANTTIME_OPT_KEY, "20180924064621").load("/user/hive/warehouse/stock_ticks_cow")
  24. SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
  25. SLF4J: Defaulting to no-operation (NOP) logger implementation
  26. SLF4J: See http://www.slf4j.org/codes#StaticLoggerBinder for further details.
  27. hoodieIncViewDF: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 15 more fields]
  28. scala> hoodieIncViewDF.registerTempTable("stock_ticks_cow_incr_tmp1")
  29. warning: there was one deprecation warning; re-run with -deprecation for details
  30. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_cow_incr_tmp1 where symbol = 'GOOG'").show(100, false);
  31. +----------------------+---------+----------------------+---------+------------+-----------+
  32. | _hoodie_commit_time | symbol | ts | volume | open | close |
  33. +----------------------+---------+----------------------+---------+------------+-----------+
  34. | 20180924065039 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  35. +----------------------+---------+----------------------+---------+------------+-----------+

Step 8: Schedule and Run Compaction for Merge-On-Read table

Let's schedule and run a compaction to create a new version of the columnar file so that read-optimized readers will see fresher data. Again, you can use the Hudi CLI to manually schedule and run the compaction.
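
Besides the CLI output below, a scheduled compaction is also visible directly on the table's timeline, where it should show up as an <instant>.compaction.requested file; a small sketch, assuming the HDFS client is available in the adhoc container:

    # A scheduled (not yet executed) compaction appears as a *.compaction.requested instant
    docker exec adhoc-1 hdfs dfs -ls /user/hive/warehouse/stock_ticks_mor/.hoodie | grep compaction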

  1. docker exec -it adhoc-1 /bin/bash
  2. root@adhoc-1:/opt# /var/hoodie/ws/hudi-cli/hudi-cli.sh
  3. ...
  4. Table command getting loaded
  5. HoodieSplashScreen loaded
  6. ===================================================================
  7. * ___ ___ *
  8. * /\__\ ___ /\ \ ___ *
  9. * / / / /\__\ / \ \ /\ \ *
  10. * / /__/ / / / / /\ \ \ \ \ \ *
  11. * / \ \ ___ / / / / / \ \__\ / \__\ *
  12. * / /\ \ /\__\ / /__/ ___ / /__/ \ |__| / /\/__/ *
  13. * \/ \ \/ / / \ \ \ /\__\ \ \ \ / / / /\/ / / *
  14. * \ / / \ \ / / / \ \ / / / \ /__/ *
  15. * / / / \ \/ / / \ \/ / / \ \__\ *
  16. * / / / \ / / \ / / \/__/ *
  17. * \/__/ \/__/ \/__/ Apache Hudi CLI *
  18. * *
  19. ===================================================================
  20. Welcome to Apache Hudi CLI. Please type help if you are looking for help.
  21. hudi->connect --path /user/hive/warehouse/stock_ticks_mor
  22. 18/09/24 06:59:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  23. 18/09/24 06:59:35 INFO table.HoodieTableMetaClient: Loading HoodieTableMetaClient from /user/hive/warehouse/stock_ticks_mor
  24. 18/09/24 06:59:35 INFO util.FSUtils: Hadoop Configuration: fs.defaultFS: [hdfs://namenode:8020], Config:[Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml], FileSystem: [DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-1261652683_11, ugi=root (auth:SIMPLE)]]]
  25. 18/09/24 06:59:35 INFO table.HoodieTableConfig: Loading table properties from /user/hive/warehouse/stock_ticks_mor/.hoodie/hoodie.properties
  26. 18/09/24 06:59:36 INFO table.HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1) from /user/hive/warehouse/stock_ticks_mor
  27. Metadata for table stock_ticks_mor loaded
  28. hoodie:stock_ticks_mor->compactions show all
  29. 20/02/10 03:41:32 INFO timeline.HoodieActiveTimeline: Loaded instants [[20200210015059__clean__COMPLETED], [20200210015059__deltacommit__COMPLETED], [20200210022758__clean__COMPLETED], [20200210022758__deltacommit__COMPLETED], [==>20200210023843__compaction__REQUESTED]]
  30. ___________________________________________________________________
  31. | Compaction Instant Time| State | Total FileIds to be Compacted|
  32. |==================================================================|
  33. # Schedule a compaction. This will use Spark Launcher to schedule compaction
  34. hoodie:stock_ticks_mor->compaction schedule --hoodieConfigs hoodie.compact.inline.max.delta.commits=1
  35. ....
  36. Compaction successfully completed for 20180924070031
  37. # Now refresh and check again. You will see that there is a new compaction requested
  38. hoodie:stock_ticks_mor->refresh
  39. 18/09/24 07:01:16 INFO table.HoodieTableMetaClient: Loading HoodieTableMetaClient from /user/hive/warehouse/stock_ticks_mor
  40. 18/09/24 07:01:16 INFO table.HoodieTableConfig: Loading table properties from /user/hive/warehouse/stock_ticks_mor/.hoodie/hoodie.properties
  41. 18/09/24 07:01:16 INFO table.HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1) from /user/hive/warehouse/stock_ticks_mor
  42. Metadata for table stock_ticks_mor loaded
  43. hoodie:stock_ticks_mor->compactions show all
  44. 18/09/24 06:34:12 INFO timeline.HoodieActiveTimeline: Loaded instants [[20180924041125__clean__COMPLETED], [20180924041125__deltacommit__COMPLETED], [20180924042735__clean__COMPLETED], [20180924042735__deltacommit__COMPLETED], [==>20180924063245__compaction__REQUESTED]]
  45. ___________________________________________________________________
  46. | Compaction Instant Time| State | Total FileIds to be Compacted|
  47. |==================================================================|
  48. | 20180924070031 | REQUESTED| 1 |
  49. # Execute the compaction. The compaction instant value passed below must be the one displayed in the above "compactions show all" query
  50. hoodie:stock_ticks_mor->compaction run --compactionInstant 20180924070031 --parallelism 2 --sparkMemory 1G --schemaFilePath /var/demo/config/schema.avsc --retry 1
  51. ....
  52. Compaction successfully completed for 20180924070031
  53. ## Now check if compaction is completed
  54. hoodie:stock_ticks_mor->refresh
  55. 18/09/24 07:03:00 INFO table.HoodieTableMetaClient: Loading HoodieTableMetaClient from /user/hive/warehouse/stock_ticks_mor
  56. 18/09/24 07:03:00 INFO table.HoodieTableConfig: Loading table properties from /user/hive/warehouse/stock_ticks_mor/.hoodie/hoodie.properties
  57. 18/09/24 07:03:00 INFO table.HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1) from /user/hive/warehouse/stock_ticks_mor
  58. Metadata for table stock_ticks_mor loaded
  59. hoodie:stock_ticks_mor->compactions show all
  60. 18/09/24 07:03:15 INFO timeline.HoodieActiveTimeline: Loaded instants [[20180924064636__clean__COMPLETED], [20180924064636__deltacommit__COMPLETED], [20180924065057__clean__COMPLETED], [20180924065057__deltacommit__COMPLETED], [20180924070031__commit__COMPLETED]]
  61. ___________________________________________________________________
  62. | Compaction Instant Time| State | Total FileIds to be Compacted|
  63. |==================================================================|
  64. | 20180924070031 | COMPLETED| 1 |
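
After the compaction completes, the compaction instant shows up as a regular commit on the MOR table's timeline and a fresh compacted base (parquet) file appears in the partition; a small sketch to confirm this from the command line, under the same assumptions as before:

    # The compaction instant is now a completed commit on the timeline
    docker exec adhoc-1 hdfs dfs -ls /user/hive/warehouse/stock_ticks_mor/.hoodie | grep '\.commit$'
    # A new compacted parquet base file should be visible in the partition folder
    docker exec adhoc-1 hdfs dfs -ls /user/hive/warehouse/stock_ticks_mor/2018/08/31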

Step 9: Run Hive Queries including incremental queries

You will see that both ReadOptimized and Snapshot queries will show the latest committed data. Let's also run the incremental query for the MOR table. From the query output below, it will be clear that the first commit time for the MOR table is 20180924064636 and the second commit time is 20180924070031.

  1. docker exec -it adhoc-2 /bin/bash
  2. beeline -u jdbc:hive2://hiveserver:10000 \
  3. --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
  4. --hiveconf hive.stats.autogather=false
  5. # Read Optimized Query
  6. 0: jdbc:hive2://hiveserver:10000> select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';
  7. WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
  8. +---------+----------------------+--+
  9. | symbol | _c1 |
  10. +---------+----------------------+--+
  11. | GOOG | 2018-08-31 10:59:00 |
  12. +---------+----------------------+--+
  13. 1 row selected (1.6 seconds)
  14. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG';
  15. +----------------------+---------+----------------------+---------+------------+-----------+--+
  16. | _hoodie_commit_time | symbol | ts | volume | open | close |
  17. +----------------------+---------+----------------------+---------+------------+-----------+--+
  18. | 20180924064636 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  19. | 20180924070031 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  20. +----------------------+---------+----------------------+---------+------------+-----------+--+
  21. # Snapshot Query
  22. 0: jdbc:hive2://hiveserver:10000> select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG';
  23. WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
  24. +---------+----------------------+--+
  25. | symbol | _c1 |
  26. +---------+----------------------+--+
  27. | GOOG | 2018-08-31 10:59:00 |
  28. +---------+----------------------+--+
  29. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_rt where symbol = 'GOOG';
  30. +----------------------+---------+----------------------+---------+------------+-----------+--+
  31. | _hoodie_commit_time | symbol | ts | volume | open | close |
  32. +----------------------+---------+----------------------+---------+------------+-----------+--+
  33. | 20180924064636 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  34. | 20180924070031 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  35. +----------------------+---------+----------------------+---------+------------+-----------+--+
  36. # Incremental Query:
  37. 0: jdbc:hive2://hiveserver:10000> set hoodie.stock_ticks_mor.consume.mode=INCREMENTAL;
  38. No rows affected (0.008 seconds)
  39. # Max-Commits covers both second batch and compaction commit
  40. 0: jdbc:hive2://hiveserver:10000> set hoodie.stock_ticks_mor.consume.max.commits=3;
  41. No rows affected (0.007 seconds)
  42. 0: jdbc:hive2://hiveserver:10000> set hoodie.stock_ticks_mor.consume.start.timestamp=20180924064636;
  43. No rows affected (0.013 seconds)
  44. # Query:
  45. 0: jdbc:hive2://hiveserver:10000> select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG' and `_hoodie_commit_time` > '20180924064636';
  46. +----------------------+---------+----------------------+---------+------------+-----------+--+
  47. | _hoodie_commit_time | symbol | ts | volume | open | close |
  48. +----------------------+---------+----------------------+---------+------------+-----------+--+
  49. | 20180924070031 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  50. +----------------------+---------+----------------------+---------+------------+-----------+--+
  51. exit
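
The same incremental read can also be expressed through the Spark DataSource API instead of Hive. The following is a minimal sketch, assuming a spark-shell session launched as in Step 10 below (so the Hudi bundle and Hadoop configuration are on the classpath); the format name and option keys are as in recent Hudi releases, and DataSource incremental queries on MOR tables may not be available in older demo images.

  1. // Sketch: incremental read of the MOR table via the Spark DataSource API.
  2. // Pull only records written after the first delta commit (20180924064636).
  3. val incDF = spark.read.format("hudi").
  4.   option("hoodie.datasource.query.type", "incremental").
  5.   option("hoodie.datasource.read.begin.instanttime", "20180924064636").
  6.   load("/user/hive/warehouse/stock_ticks_mor")
  7. incDF.select("_hoodie_commit_time", "symbol", "ts", "volume", "open", "close").
  8.   filter("symbol = 'GOOG'").
  9.   show(100, false)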

Step 10: Read Optimized and Snapshot queries for MOR with Spark-SQL after compaction

  1. docker exec -it adhoc-1 /bin/bash
  2. $SPARK_INSTALL/bin/spark-shell \
  3. --jars $HUDI_SPARK_BUNDLE \
  4. --driver-class-path $HADOOP_CONF_DIR \
  5. --conf spark.sql.hive.convertMetastoreParquet=false \
  6. --deploy-mode client \
  7. --driver-memory 1G \
  8. --master local[2] \
  9. --executor-memory 3G \
  10. --num-executors 1
  11. # Read Optimized Query
  12. scala> spark.sql("select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG'").show(100, false)
  13. +---------+----------------------+
  14. | symbol | max(ts) |
  15. +---------+----------------------+
  16. | GOOG | 2018-08-31 10:59:00 |
  17. +---------+----------------------+
  18. 1 row selected (1.6 seconds)
  19. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG'").show(100, false)
  20. +----------------------+---------+----------------------+---------+------------+-----------+
  21. | _hoodie_commit_time | symbol | ts | volume | open | close |
  22. +----------------------+---------+----------------------+---------+------------+-----------+
  23. | 20180924064636 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  24. | 20180924070031 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  25. +----------------------+---------+----------------------+---------+------------+-----------+
  26. # Snapshot Query
  27. scala> spark.sql("select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG'").show(100, false)
  28. +---------+----------------------+
  29. | symbol | max(ts) |
  30. +---------+----------------------+
  31. | GOOG | 2018-08-31 10:59:00 |
  32. +---------+----------------------+
  33. scala> spark.sql("select `_hoodie_commit_time`, symbol, ts, volume, open, close from stock_ticks_mor_rt where symbol = 'GOOG'").show(100, false)
  34. +----------------------+---------+----------------------+---------+------------+-----------+
  35. | _hoodie_commit_time | symbol | ts | volume | open | close |
  36. +----------------------+---------+----------------------+---------+------------+-----------+
  37. | 20180924064636 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02 |
  38. | 20180924070031 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215 |
  39. +----------------------+---------+----------------------+---------+------------+-----------+
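
As an alternative to the Hive-synced _ro/_rt tables above, the same table can be queried directly from its path with the Spark DataSource API. This is a minimal sketch within the same spark-shell session, assuming option values as in recent Hudi releases; the temp view names are only for illustration.

  1. // Sketch: Read Optimized vs Snapshot queries straight from the table path.
  2. // "read_optimized" reads only compacted base files; "snapshot" also merges log files.
  3. val roDF = spark.read.format("hudi").
  4.   option("hoodie.datasource.query.type", "read_optimized").
  5.   load("/user/hive/warehouse/stock_ticks_mor")
  6. val rtDF = spark.read.format("hudi").
  7.   option("hoodie.datasource.query.type", "snapshot").
  8.   load("/user/hive/warehouse/stock_ticks_mor")
  9. roDF.createOrReplaceTempView("stock_ticks_mor_ro_ds")
  10. rtDF.createOrReplaceTempView("stock_ticks_mor_rt_ds")
  11. spark.sql("select symbol, max(ts) from stock_ticks_mor_rt_ds group by symbol having symbol = 'GOOG'").show(100, false)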

Step 11: Presto Read Optimized queries on MOR table after compaction

note

This section of the demo is not supported for Mac AArch64 users at this time.

  1. docker exec -it presto-worker-1 presto --server presto-coordinator-1:8090
  2. presto> use hive.default;
  3. USE
  4. # Read Optimized Query
  5. presto:default> select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';
  6. symbol | _col1
  7. --------+---------------------
  8. GOOG | 2018-08-31 10:59:00
  9. (1 row)
  10. Query 20190822_182319_00011_segyw, FINISHED, 1 node
  11. Splits: 49 total, 49 done (100.00%)
  12. 0:01 [197 rows, 613B] [133 rows/s, 414B/s]
  13. presto:default> select "_hoodie_commit_time", symbol, ts, volume, open, close from stock_ticks_mor_ro where symbol = 'GOOG';
  14. _hoodie_commit_time | symbol | ts | volume | open | close
  15. ---------------------+--------+---------------------+--------+-----------+----------
  16. 20190822180250 | GOOG | 2018-08-31 09:59:00 | 6330 | 1230.5 | 1230.02
  17. 20190822181944 | GOOG | 2018-08-31 10:59:00 | 9021 | 1227.1993 | 1227.215
  18. (2 rows)
  19. Query 20190822_182333_00012_segyw, FINISHED, 1 node
  20. Splits: 17 total, 17 done (100.00%)
  21. 0:02 [197 rows, 613B] [98 rows/s, 307B/s]
  22. presto:default>

This brings the demo to an end.

Testing Hudi in Local Docker environment

You can bring up a Hadoop Docker environment containing Hadoop, Hive and Spark services with support for Hudi.

  1. $ mvn pre-integration-test -DskipTests

The above command builds Docker images for all the services, with the current Hudi source installed at /var/hoodie/ws, and also brings up the services using a compose file. The Docker images currently use Hadoop (v2.8.4), Hive (v2.3.3) and Spark (v2.4.4).

To bring down the containers

  1. $ cd hudi-integ-test
  2. $ mvn docker-compose:down

If you want to bring up the Docker containers, use

  1. $ cd hudi-integ-test
  2. $ mvn docker-compose:up -DdetachedMode=true

Hudi is a library that operates in a broader data analytics/ingestion environment involving Hadoop, Hive and Spark. Interoperability with all these systems is a key objective for us. We are actively adding integration tests under hudi-integ-test/src/test/java that make use of this Docker environment (see hudi-integ-test/src/test/java/org/apache/hudi/integ/ITTestHoodieSanity.java). An illustrative sketch of the kind of check these tests perform is shown below.
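
For illustration only, a sanity check in this spirit could use JDBC to verify that a Hudi table synced to Hive is queryable once the cluster is up. This is a minimal sketch, not the actual ITTestHoodieSanity code; it assumes the Hive JDBC driver is on the classpath and that the stock_ticks_mor_ro table from the demo exists.

  1. // Illustrative sketch of a Hive sanity check, not the actual ITTestHoodieSanity code.
  2. import java.sql.DriverManager
  3. val conn = DriverManager.getConnection("jdbc:hive2://hiveserver:10000", "", "")
  4. try {
  5.   val stmt = conn.createStatement()
  6.   val rs = stmt.executeQuery("select count(*) from stock_ticks_mor_ro")
  7.   assert(rs.next() && rs.getLong(1) > 0, "expected rows in stock_ticks_mor_ro")
  8. } finally {
  9.   conn.close()
  10. }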

Building Local Docker Containers

The Docker images required for the demo and for running integration tests are already available on Docker Hub. The Docker images and compose scripts are carefully implemented so that they serve a dual purpose:

  1. The Docker images have inbuilt Hudi jar files, with environment variables pointing to those jars (HUDI_HADOOP_BUNDLE, …)
  2. For running integration tests, we need the locally generated jars to be used by the services running within Docker. The docker-compose scripts (see docker/compose/docker-compose_hadoop284_hive233_spark244.yml) ensure the local jars override the inbuilt jars by mounting the local Hudi workspace over the Docker location
  3. As these Docker containers mount the local Hudi workspace, any changes made in the workspace are automatically reflected in the containers. This is a convenient way to develop and verify Hudi for developers who do not have access to a distributed environment. Note that this is how integration tests are run.

This helps avoid maintaining separate Docker images and avoids the costly step of building Hudi Docker images locally. However, users in locations with lower network bandwidth can still build local Docker images by running the script docker/build_local_docker_images.sh before running docker/setup_demo.sh.

Here are the commands:

  1. cd docker
  2. ./build_local_docker_images.sh
  3. .....
  4. [INFO] Reactor Summary:
  5. [INFO]
  6. [INFO] Hudi ............................................... SUCCESS [ 2.507 s]
  7. [INFO] hudi-common ........................................ SUCCESS [ 15.181 s]
  8. [INFO] hudi-aws ........................................... SUCCESS [ 2.621 s]
  9. [INFO] hudi-timeline-service .............................. SUCCESS [ 1.811 s]
  10. [INFO] hudi-client ........................................ SUCCESS [ 0.065 s]
  11. [INFO] hudi-client-common ................................. SUCCESS [ 8.308 s]
  12. [INFO] hudi-hadoop-mr ..................................... SUCCESS [ 3.733 s]
  13. [INFO] hudi-spark-client .................................. SUCCESS [ 18.567 s]
  14. [INFO] hudi-sync-common ................................... SUCCESS [ 0.794 s]
  15. [INFO] hudi-hive-sync ..................................... SUCCESS [ 3.691 s]
  16. [INFO] hudi-spark-datasource .............................. SUCCESS [ 0.121 s]
  17. [INFO] hudi-spark-common_2.11 ............................. SUCCESS [ 12.979 s]
  18. [INFO] hudi-spark2_2.11 ................................... SUCCESS [ 12.516 s]
  19. [INFO] hudi-spark_2.11 .................................... SUCCESS [ 35.649 s]
  20. [INFO] hudi-utilities_2.11 ................................ SUCCESS [ 5.881 s]
  21. [INFO] hudi-utilities-bundle_2.11 ......................... SUCCESS [ 12.661 s]
  22. [INFO] hudi-cli ........................................... SUCCESS [ 19.858 s]
  23. [INFO] hudi-java-client ................................... SUCCESS [ 3.221 s]
  24. [INFO] hudi-flink-client .................................. SUCCESS [ 5.731 s]
  25. [INFO] hudi-spark3_2.12 ................................... SUCCESS [ 8.627 s]
  26. [INFO] hudi-dla-sync ...................................... SUCCESS [ 1.459 s]
  27. [INFO] hudi-sync .......................................... SUCCESS [ 0.053 s]
  28. [INFO] hudi-hadoop-mr-bundle .............................. SUCCESS [ 5.652 s]
  29. [INFO] hudi-hive-sync-bundle .............................. SUCCESS [ 1.623 s]
  30. [INFO] hudi-spark-bundle_2.11 ............................. SUCCESS [ 10.930 s]
  31. [INFO] hudi-presto-bundle ................................. SUCCESS [ 3.652 s]
  32. [INFO] hudi-timeline-server-bundle ........................ SUCCESS [ 4.804 s]
  33. [INFO] hudi-trino-bundle .................................. SUCCESS [ 5.991 s]
  34. [INFO] hudi-hadoop-docker ................................. SUCCESS [ 2.061 s]
  35. [INFO] hudi-hadoop-base-docker ............................ SUCCESS [ 53.372 s]
  36. [INFO] hudi-hadoop-base-java11-docker ..................... SUCCESS [ 48.545 s]
  37. [INFO] hudi-hadoop-namenode-docker ........................ SUCCESS [ 6.098 s]
  38. [INFO] hudi-hadoop-datanode-docker ........................ SUCCESS [ 4.825 s]
  39. [INFO] hudi-hadoop-history-docker ......................... SUCCESS [ 3.829 s]
  40. [INFO] hudi-hadoop-hive-docker ............................ SUCCESS [ 52.660 s]
  41. [INFO] hudi-hadoop-sparkbase-docker ....................... SUCCESS [01:02 min]
  42. [INFO] hudi-hadoop-sparkmaster-docker ..................... SUCCESS [ 12.661 s]
  43. [INFO] hudi-hadoop-sparkworker-docker ..................... SUCCESS [ 4.350 s]
  44. [INFO] hudi-hadoop-sparkadhoc-docker ...................... SUCCESS [ 59.083 s]
  45. [INFO] hudi-hadoop-presto-docker .......................... SUCCESS [01:31 min]
  46. [INFO] hudi-hadoop-trinobase-docker ....................... SUCCESS [02:40 min]
  47. [INFO] hudi-hadoop-trinocoordinator-docker ................ SUCCESS [ 14.003 s]
  48. [INFO] hudi-hadoop-trinoworker-docker ..................... SUCCESS [ 12.100 s]
  49. [INFO] hudi-integ-test .................................... SUCCESS [ 13.581 s]
  50. [INFO] hudi-integ-test-bundle ............................. SUCCESS [ 27.212 s]
  51. [INFO] hudi-examples ...................................... SUCCESS [ 8.090 s]
  52. [INFO] hudi-flink_2.11 .................................... SUCCESS [ 4.217 s]
  53. [INFO] hudi-kafka-connect ................................. SUCCESS [ 2.966 s]
  54. [INFO] hudi-flink-bundle_2.11 ............................. SUCCESS [ 11.155 s]
  55. [INFO] hudi-kafka-connect-bundle .......................... SUCCESS [ 12.369 s]
  56. [INFO] ------------------------------------------------------------------------
  57. [INFO] BUILD SUCCESS
  58. [INFO] ------------------------------------------------------------------------
  59. [INFO] Total time: 14:35 min
  60. [INFO] Finished at: 2022-01-12T18:41:27-08:00
  61. [INFO] ------------------------------------------------------------------------