书栈网 · BookStack — this search took 0.055 seconds and found 621 relevant results.
  • Google Cloud Storage

    GCS Filesystem GCS Configs GCS Credentials GCS Libs GCS Filesystem For Hudi storage on GCS, regional buckets provide a DFS API with strong consistency. GCS Configs Ther...
  • Google Cloud

    Google Cloud GCS Configs GCS Credentials GCS Libs Google Cloud For Hudi storage on GCS, regional buckets provide a DFS API with strong consistency. GCS Configs There are ...
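
    As a taste of what those two pages cover, here is a minimal sketch of pointing a spark-shell session at GCS before writing a Hudi table. The bucket name, project id and keyfile path are placeholders, and having the GCS Hadoop connector plus the Hudi bundle on the classpath is an assumption; treat it as illustrative, not the page's own example.

    // spark-shell with the Hudi Spark bundle and the GCS Hadoop connector on the classpath (assumed)
    val hadoopConf = spark.sparkContext.hadoopConfiguration
    hadoopConf.set("fs.defaultFS", "gs://hudi-demo-bucket")   // placeholder bucket
    hadoopConf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
    hadoopConf.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
    hadoopConf.set("fs.gs.project.id", "my-gcp-project")      // placeholder project id
    hadoopConf.set("google.cloud.auth.service.account.enable", "true")
    hadoopConf.set("google.cloud.auth.service.account.json.keyfile", "/path/to/key.json")  // placeholder path
    // Hudi writes then target a gs:// base path under this bucket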
  • Streaming Writes

    Streaming Writes Spark Streaming Streaming Writes Spark Streaming You can write Hudi tables using Spark’s Structured Streaming. Scala Python // spark-shell // prepare to ...
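
    A minimal sketch of such a streaming write, assuming a spark-shell with the Hudi bundle on the classpath; the built-in rate source, the column names and the local paths are placeholders standing in for the page's own example.

    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.streaming.Trigger

    // toy source: the built-in rate stream, reshaped into the columns the Hudi writer is configured with
    val streamingDF = spark.readStream.format("rate").option("rowsPerSecond", "5").load()
      .withColumn("uuid", expr("uuid()"))
      .withColumn("ts", col("timestamp").cast("long"))
      .withColumn("partitionpath", expr("concat('part=', cast(value % 4 as string))"))

    val query = streamingDF.writeStream
      .format("hudi")
      .option("hoodie.table.name", "hudi_streaming_tbl")
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
      .option("hoodie.datasource.write.precombine.field", "ts")
      .option("checkpointLocation", "/tmp/hudi_streaming_checkpoint")   // placeholder path
      .outputMode("append")
      .trigger(Trigger.ProcessingTime("30 seconds"))
      .start("/tmp/hudi_streaming_tbl")                                  // placeholder base path
    // query.awaitTermination()  // block the shell only if the stream should keep running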
  • AWS S3

    AWS S3 AWS configs AWS Credentials AWS Libs AWS S3 Versioned Bucket AWS S3 On this page, we explain how to get your Hudi Spark job to store data in AWS S3. AWS configs There ...
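
    A minimal sketch of the idea, assuming hadoop-aws and the Hudi bundle are on the spark-shell classpath; the bucket, the keys and the tiny demo DataFrame are placeholders, and the page itself lists the full set of AWS configs and credential providers.

    import spark.implicits._
    import org.apache.spark.sql.SaveMode

    // S3A credentials and filesystem implementation (placeholders; prefer instance profiles or env vars in practice)
    val hadoopConf = spark.sparkContext.hadoopConfiguration
    hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    hadoopConf.set("fs.s3a.access.key", "<your-access-key>")
    hadoopConf.set("fs.s3a.secret.key", "<your-secret-key>")

    // tiny demo frame, then a Hudi write against an s3a:// base path
    val df = Seq(("id-1", "part=0", 1L)).toDF("uuid", "partitionpath", "ts")
    df.write.format("hudi")
      .option("hoodie.table.name", "hudi_s3_tbl")
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
      .option("hoodie.datasource.write.precombine.field", "ts")
      .mode(SaveMode.Append)
      .save("s3a://my-bucket/hudi_s3_tbl")   // placeholder bucket/path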
  • IBM Cloud

    IBM Cloud IBM COS configs IBM Cloud Object Storage Credentials IBM Cloud Object Storage Libs IBM Cloud On this page, we explain how to get your Hudi Spark job to store data in I...
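
    A rough sketch of the same idea for COS via the Stocator connector, assuming the Stocator jar is on the classpath; the service name "myCos", the endpoint, keys and paths are placeholders, and the exact config keys should be checked against the page and the Stocator docs for your version.

    val hadoopConf = spark.sparkContext.hadoopConfiguration
    hadoopConf.set("fs.stocator.scheme.list", "cos")
    hadoopConf.set("fs.cos.impl", "com.ibm.stocator.fs.ObjectStoreFileSystem")
    hadoopConf.set("fs.stocator.cos.impl", "com.ibm.stocator.fs.cos.COSAPIClient")
    hadoopConf.set("fs.stocator.cos.scheme", "cos")
    hadoopConf.set("fs.cos.myCos.endpoint", "s3.us-south.cloud-object-storage.appdomain.cloud") // placeholder endpoint
    hadoopConf.set("fs.cos.myCos.access.key", "<access-key>")
    hadoopConf.set("fs.cos.myCos.secret.key", "<secret-key>")
    // Hudi writes then target a cos://<bucket>.myCos/<table> base path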
  • Hive Metastore

    Hive Metastore Spark Data Source example Query using HiveQL Use partition extractor properly Hive Sync Tool Hive Sync Configuration Sync modes HMS JDBC HIVEQL Flink Setup ...
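
    As a small sketch of the Hive sync part, here is a datasource write with HMS-mode sync enabled, assuming a reachable metastore; the database, table, partition field, metastore URI and paths are placeholders, and the demo DataFrame merely stands in for real data with those columns.

    import spark.implicits._
    val df = Seq(("id-1", "part=0", 1L)).toDF("uuid", "partitionpath", "ts")

    // enable Hive sync (HMS mode) on a regular Hudi datasource write
    df.write.format("hudi")
      .option("hoodie.table.name", "hudi_hms_tbl")
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
      .option("hoodie.datasource.write.precombine.field", "ts")
      .option("hoodie.datasource.hive_sync.enable", "true")
      .option("hoodie.datasource.hive_sync.mode", "hms")
      .option("hoodie.datasource.hive_sync.metastore.uris", "thrift://hive-metastore:9083")  // placeholder URI
      .option("hoodie.datasource.hive_sync.database", "default")
      .option("hoodie.datasource.hive_sync.table", "hudi_hms_tbl")
      .option("hoodie.datasource.hive_sync.partition_fields", "partitionpath")
      .option("hoodie.datasource.hive_sync.partition_extractor_class",
              "org.apache.hudi.hive.MultiPartKeysValueExtractor")
      .mode("append")
      .save("/tmp/hudi_hms_tbl")   // placeholder base path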
  • Metadata Indexing

    Metadata Indexing Setup Async Indexing Configurations Schedule indexing Execute Indexing Drop Index Caveats Related Resources Videos Metadata Indexing Hudi maintains a s...
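
    A minimal sketch of the writer-side knobs involved, assuming a recent Hudi release where the metadata table and async indexing are available; the table name, path and the choice of the column-stats index are placeholders, and scheduling/executing the index itself is done with the indexer utility the page describes.

    import spark.implicits._
    val df = Seq(("id-1", "part=0", 1L)).toDF("uuid", "partitionpath", "ts")
    df.write.format("hudi")
      .option("hoodie.table.name", "hudi_indexed_tbl")
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
      .option("hoodie.datasource.write.precombine.field", "ts")
      .option("hoodie.metadata.enable", "true")                    // keep the metadata table on
      .option("hoodie.metadata.index.async", "true")               // build indexes asynchronously
      .option("hoodie.metadata.index.column.stats.enable", "true") // example: column stats index
      .mode("append")
      .save("/tmp/hudi_indexed_tbl")   // placeholder base path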
  • SQL DDL

    SQL DDL Spark SQL Create table Create non-partitioned table Create partitioned table Create table with record keys and ordering fields Create table from an external location C...
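
    A minimal sketch of one of those DDL forms, issued from the same spark-shell via spark.sql; the table name, columns and properties are placeholders following the primaryKey/preCombineField convention the page documents.

    // partitioned Copy-on-Write table with a record key and an ordering (precombine) field
    spark.sql("""
      CREATE TABLE IF NOT EXISTS hudi_ddl_tbl (
        uuid STRING,
        name STRING,
        ts BIGINT,
        partitionpath STRING
      ) USING hudi
      TBLPROPERTIES (
        type = 'cow',
        primaryKey = 'uuid',
        preCombineField = 'ts'
      )
      PARTITIONED BY (partitionpath)
    """)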
  • Docker Demo

    Docker Demo A Demo using Docker containers Prerequisites Setting up Docker Cluster Build Hudi Bringing up Demo Cluster Demo Step 1 : Publish the first batch to Kafka Step 2: ...
  • Quick-Start Guide

    Quick-Start Guide Setting up spark-shell Insert data Query data Update data Incremental query Point-in-time query Where to go from here? Quick-Start Guide This guide gives a brief introduction to Hudi's features using spark-shell. Using the Spark data source, we show through code snippets how to insert and update a dataset of Hudi's default table type: Copy on Write. Every ...
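
    A condensed sketch of that insert-then-update flow, assuming a spark-shell launched with the Hudi Spark bundle; the table name and local path are placeholders, and the data comes from Hudi's QuickstartUtils generator rather than the page's exact snippet.

    import scala.collection.JavaConverters._
    import org.apache.spark.sql.SaveMode
    import org.apache.hudi.QuickstartUtils._
    import spark.implicits._

    val tableName = "hudi_trips_cow"
    val basePath  = "file:///tmp/hudi_trips_cow"   // placeholder local path
    val dataGen   = new DataGenerator

    // 1) insert a first batch into a Copy-on-Write table
    val inserts  = convertToStringList(dataGen.generateInserts(10)).asScala.toSeq
    val insertDF = spark.read.json(spark.createDataset(inserts))
    insertDF.write.format("hudi")
      .options(getQuickstartWriteConfigs)
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
      .option("hoodie.datasource.write.precombine.field", "ts")
      .option("hoodie.table.name", tableName)
      .mode(SaveMode.Overwrite)
      .save(basePath)

    // 2) upsert a batch of updates against the same record keys, then read the table back
    val updates  = convertToStringList(dataGen.generateUpdates(10)).asScala.toSeq
    val updateDF = spark.read.json(spark.createDataset(updates))
    updateDF.write.format("hudi")
      .options(getQuickstartWriteConfigs)
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
      .option("hoodie.datasource.write.precombine.field", "ts")
      .option("hoodie.table.name", tableName)
      .mode(SaveMode.Append)
      .save(basePath)

    spark.read.format("hudi").load(basePath).select("uuid", "partitionpath", "fare").show(false)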