Alibaba Cloud DLF

Data Lake Formation (DLF) is the unified metadata management service of Alibaba Cloud. It is compatible with the Hive Metastore protocol.

What is DLF

Doris can access DLF the same way as it accesses Hive Metastore.

Connect to DLF

Create a DLF Catalog:

  CREATE CATALOG dlf PROPERTIES (
      "type" = "hms",
      "hive.metastore.type" = "dlf",
      "dlf.proxy.mode" = "DLF_ONLY",
      "dlf.endpoint" = "datalake-vpc.cn-beijing.aliyuncs.com",
      "dlf.region" = "cn-beijing",
      "dlf.uid" = "uid",
      "dlf.catalog.id" = "catalog_id", -- optional
      "dlf.access_key" = "ak",
      "dlf.secret_key" = "sk"
  );

The type must always be hms. If you need to access Alibaba Cloud OSS over the public network, you can add "dlf.access.public" = "true".

  • dlf.endpoint: DLF endpoint. See Regions and Endpoints of DLF.
  • dlf.region: DLF region. See Regions and Endpoints of DLF.
  • dlf.uid: Alibaba Cloud account ID. You can find the "Account ID" in the upper right corner of the Alibaba Cloud console.
  • dlf.catalog.id: Optional. Specifies the DLF catalog; if not specified, the default catalog ID is used.
  • dlf.access_key: AccessKey, which you can create and manage on the Alibaba Cloud console.
  • dlf.secret_key: SecretKey, which you can create and manage on the Alibaba Cloud console.

Other configuration items are fixed and require no modifications.

After the above steps, you can access metadata in DLF in the same way as you access Hive Metastore.
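For example, you can switch to the catalog created above and query it directly. This is only a usage sketch; the database and table names below are hypothetical placeholders:

```sql
-- Switch to the DLF catalog created above and browse its metadata.
SWITCH dlf;
SHOW DATABASES;

-- db_name and table_name are placeholder names, not part of the original example.
SELECT * FROM db_name.table_name LIMIT 10;
```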

Doris supports accessing Hive/Iceberg/Hudi metadata in DLF.

Use OSS-HDFS as the datasource

  1. Enable OSS-HDFS. See Grant access to OSS or OSS-HDFS.

  2. Download the JindoData SDK. If the Jindo SDK directory already exists on the cluster, skip this step.

  3. Decompress jindosdk.tar.gz (or locate the existing Jindo SDK directory on the cluster), enter its lib directory, and copy jindo-core.jar and jindo-sdk.jar to both ${DORIS_HOME}/fe/lib and ${DORIS_HOME}/be/lib/java_extensions/preload-extensions.

  4. Create a DLF Catalog with oss.hdfs.enabled set to true:

    CREATE CATALOG dlf_oss_hdfs PROPERTIES (
        "type" = "hms",
        "hive.metastore.type" = "dlf",
        "dlf.proxy.mode" = "DLF_ONLY",
        "dlf.endpoint" = "datalake-vpc.cn-beijing.aliyuncs.com",
        "dlf.region" = "cn-beijing",
        "dlf.uid" = "uid",
        "dlf.catalog.id" = "catalog_id", -- optional
        "dlf.access_key" = "ak",
        "dlf.secret_key" = "sk",
        "oss.hdfs.enabled" = "true"
    );
  5. If the Jindo SDK version is inconsistent with the version used on the EMR cluster, a Plugin not found error will be reported, and the Jindo SDK needs to be replaced with the matching version.
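Step 3 above can be sketched as a small shell snippet. The temporary directories and empty jar files below only simulate the layout so the commands can be run anywhere; in a real deployment, JINDO_SDK_HOME and DORIS_HOME would point at the extracted SDK and the Doris installation:

```shell
# Simulated layout (placeholders): in practice these point at the real
# extracted Jindo SDK directory and the real Doris installation.
JINDO_SDK_HOME=$(mktemp -d)
DORIS_HOME=$(mktemp -d)
mkdir -p "${JINDO_SDK_HOME}/lib"
touch "${JINDO_SDK_HOME}/lib/jindo-core.jar" "${JINDO_SDK_HOME}/lib/jindo-sdk.jar"

# The actual deployment step: copy both jars to the FE lib directory
# and the BE preload-extensions directory.
mkdir -p "${DORIS_HOME}/fe/lib" "${DORIS_HOME}/be/lib/java_extensions/preload-extensions"
for jar in jindo-core.jar jindo-sdk.jar; do
  cp "${JINDO_SDK_HOME}/lib/${jar}" "${DORIS_HOME}/fe/lib/"
  cp "${JINDO_SDK_HOME}/lib/${jar}" "${DORIS_HOME}/be/lib/java_extensions/preload-extensions/"
done
```

After copying, restart the FE and BE processes so the jars are picked up.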

DLF Iceberg Catalog

  CREATE CATALOG dlf_iceberg PROPERTIES (
      "type" = "iceberg",
      "iceberg.catalog.type" = "dlf",
      "dlf.proxy.mode" = "DLF_ONLY",
      "dlf.endpoint" = "datalake-vpc.cn-beijing.aliyuncs.com",
      "dlf.region" = "cn-beijing",
      "dlf.uid" = "uid",
      "dlf.catalog.id" = "catalog_id", -- optional
      "dlf.access_key" = "ak",
      "dlf.secret_key" = "sk"
  );
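Once created, the Iceberg catalog is queried the same way as the HMS-type catalog above. A brief sketch, where the database and table names are hypothetical placeholders:

```sql
SWITCH dlf_iceberg;
SHOW DATABASES;

-- iceberg_db and sample_tbl are placeholder names.
SELECT * FROM iceberg_db.sample_tbl LIMIT 10;
```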

Column type mapping

The column type mapping is consistent with that of the Hive Catalog. Please refer to the column type mapping section in Hive Catalog.