Iceberg

Limitations

  1. Supports Iceberg V1/V2.
  2. The V2 format supports only Position Deletes; Equality Deletes are not supported.

Create Catalog

Create Catalog Based on Hive Metastore

This is basically the same as creating a Hive Catalog, so only a simple example is given here. See Hive Catalog for other examples.

  CREATE CATALOG iceberg PROPERTIES (
      'type'='hms',
      'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
      'hadoop.username' = 'hive',
      'dfs.nameservices'='your-nameservice',
      'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
      'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
      'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
      'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
  );
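After creation, the catalog can be switched to and queried like any other. A minimal sketch, assuming the catalog above; the database name iceberg_db and table name iceberg_tbl are hypothetical:

  -- Switch to the newly created catalog
  SWITCH iceberg;
  -- List databases exposed by the catalog
  SHOW DATABASES;
  -- iceberg_db and iceberg_tbl are placeholder names
  SELECT * FROM iceberg_db.iceberg_tbl LIMIT 10;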

Create Catalog based on Iceberg API

This method uses the Iceberg API to access metadata, and supports services such as Hadoop File System, Hive, REST, DLF, and Glue as Iceberg's catalog.

Hadoop Catalog

  CREATE CATALOG iceberg_hadoop PROPERTIES (
      'type'='iceberg',
      'iceberg.catalog.type' = 'hadoop',
      'warehouse' = 'hdfs://your-host:8020/dir/key'
  );

  CREATE CATALOG iceberg_hadoop_ha PROPERTIES (
      'type'='iceberg',
      'iceberg.catalog.type' = 'hadoop',
      'warehouse' = 'hdfs://your-nameservice/dir/key',
      'dfs.nameservices'='your-nameservice',
      'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
      'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
      'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
      'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
  );

Hive Metastore

  CREATE CATALOG iceberg PROPERTIES (
      'type'='iceberg',
      'iceberg.catalog.type'='hms',
      'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
      'hadoop.username' = 'hive',
      'dfs.nameservices'='your-nameservice',
      'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
      'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
      'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
      'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
  );

AWS Glue

  CREATE CATALOG glue PROPERTIES (
      "type"="iceberg",
      "iceberg.catalog.type" = "glue",
      "glue.endpoint" = "https://glue.us-east-1.amazonaws.com",
      "glue.access_key" = "ak",
      "glue.secret_key" = "sk"
  );

For Iceberg-specific properties, see Iceberg Glue Catalog.

Alibaba Cloud DLF

See Alibaba Cloud DLF Catalog.

REST Catalog

This method requires a REST service to be available in advance, and users need to implement the REST interface for obtaining Iceberg metadata.

  CREATE CATALOG iceberg PROPERTIES (
      'type'='iceberg',
      'iceberg.catalog.type'='rest',
      'uri' = 'http://172.21.0.1:8181'
  );

If the data is on HDFS with High Availability (HA) enabled, you need to add the HA configuration to the catalog:

  CREATE CATALOG iceberg PROPERTIES (
      'type'='iceberg',
      'iceberg.catalog.type'='rest',
      'uri' = 'http://172.21.0.1:8181',
      'dfs.nameservices'='your-nameservice',
      'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
      'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.1:8020',
      'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.2:8020',
      'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
  );

Google Dataproc Metastore

  CREATE CATALOG iceberg PROPERTIES (
      "type"="iceberg",
      "iceberg.catalog.type"="hms",
      "hive.metastore.uris" = "thrift://172.21.0.1:9083",
      "gs.endpoint" = "https://storage.googleapis.com",
      "gs.region" = "us-east-1",
      "gs.access_key" = "ak",
      "gs.secret_key" = "sk",
      "use_path_style" = "true"
  );

hive.metastore.uris: the URI of the Dataproc Metastore, which can be found in the Metastore Services console. See Dataproc Metastore Services.

Iceberg On Object Storage

If the data is stored on S3, the following properties can be added to the catalog definition:

  1. "s3.access_key" = "ak"
  2. "s3.secret_key" = "sk"
  3. "s3.endpoint" = "s3.us-east-1.amazonaws.com"
  4. "s3.region" = "us-east-1"

If the data is stored on Alibaba Cloud OSS:

  1. "oss.access_key" = "ak"
  2. "oss.secret_key" = "sk"
  3. "oss.endpoint" = "oss-cn-beijing-internal.aliyuncs.com"
  4. "oss.region" = "oss-cn-beijing"

If the data is stored on Tencent Cloud COS:

  1. "cos.access_key" = "ak"
  2. "cos.secret_key" = "sk"
  3. "cos.endpoint" = "cos.ap-beijing.myqcloud.com"
  4. "cos.region" = "ap-beijing"

If the data is stored on Huawei Cloud OBS:

  1. "obs.access_key" = "ak"
  2. "obs.secret_key" = "sk"
  3. "obs.endpoint" = "obs.cn-north-4.myhuaweicloud.com"
  4. "obs.region" = "cn-north-4"

Column type mapping

Consistent with Hive Catalog; please refer to the column type mapping section in Hive Catalog.

Time Travel

Doris supports reading a specified snapshot of an Iceberg table.

Every write operation to an Iceberg table generates a new snapshot, and by default read requests read only the latest snapshot.

You can use the FOR TIME AS OF and FOR VERSION AS OF clauses to read historical data based on the time at which a snapshot was generated or on the snapshot ID. Examples:

  SELECT * FROM iceberg_tbl FOR TIME AS OF "2022-10-07 17:20:37";

  SELECT * FROM iceberg_tbl FOR VERSION AS OF 868895038966572;

In addition, you can use the iceberg_meta table function to query the snapshot information of a specified table, as shown below.
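A minimal sketch, assuming a catalog named iceberg and a hypothetical table iceberg_db.iceberg_tbl; the snapshot_id values in the result can then be used with FOR VERSION AS OF:

  -- Query the snapshot history of a table; the table name is a placeholder
  SELECT * FROM iceberg_meta(
      "table" = "iceberg.iceberg_db.iceberg_tbl",
      "query_type" = "snapshots"
  );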