Metadata Migration: Shut down cluster services · Exporting metadata · Initializing the new metadata store · Create database · Update configuration · Create Druid tables · MySQL · PostgreSQL ...
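The "Update configuration" step above points the cluster at the new metadata store via runtime properties. A minimal sketch, assuming a PostgreSQL target; the host, database name, and credentials are placeholders:

```properties
# common.runtime.properties — switch the metadata store to PostgreSQL
# (hostname, database, and credentials below are example values)
druid.metadata.storage.type=postgresql
druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
```

The matching PostgreSQL metadata-store extension must also be loaded for this configuration to take effect.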
Apache Parquet Extension: This Apache Druid module extends Druid's Hadoop-based indexing to ingest data directly from offline Apache Parquet files. Note...
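To use the extension, `druid-parquet-extensions` (together with `druid-avro-extensions`, which it depends on) must appear in `druid.extensions.loadList`. A sketch of the Hadoop `ioConfig` fragment that selects the Parquet input format; the HDFS path is a placeholder:

```json
{
  "ioConfig": {
    "type": "hadoop",
    "inputSpec": {
      "type": "static",
      "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetInputFormat",
      "paths": "hdfs://namenode.example.com:8020/data/events.parquet"
    }
  }
}
```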
Docker: Prerequisites · Getting started · Compose file · Configuration · Launching the cluster · Docker Memory Requirements. In this quickstart, we will download the Apache Druid...
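In the Docker quickstart, the Compose file reads an `environment` file whose variables configure both the JVM and Druid itself. A minimal sketch of such a file; the specific values are examples, not recommendations:

```properties
# environment file consumed by the Druid Docker containers
# JVM memory settings read by the container entry point (example sizes)
DRUID_XMS=1g
DRUID_XMX=1g
DRUID_MAXDIRECTMEMORYSIZE=2g
# Druid runtime properties can also be set as environment variables,
# with the dots in the property name replaced by underscores:
druid_metadata_storage_type=postgresql
```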
Data management API: Note for Coordinator's POST and DELETE APIs. This document describes the data management API endpoints for Apache Druid. This includes in...
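Among the Coordinator's data management endpoints, `POST /druid/coordinator/v1/datasources/{dataSourceName}` marks all of a datasource's segments as used, while `DELETE` on the same path marks the datasource as unused. A small sketch that builds the request URL; the helper function and the Coordinator address are assumptions for illustration:

```python
# Hypothetical helper that assembles the Coordinator data-management URL.
# The endpoint path follows the Druid Coordinator API; host/port are placeholders.
COORDINATOR = "http://localhost:8081"

def datasource_url(datasource: str) -> str:
    """POST to this URL marks all segments used; DELETE marks the datasource unused."""
    return f"{COORDINATOR}/druid/coordinator/v1/datasources/{datasource}"

print(datasource_url("wikipedia"))
```

An HTTP client of your choice would then issue the POST or DELETE against this URL.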
Password providers: Apache Druid needs passwords to access various secured systems, such as the metadata store and the key store containing server certificates....
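Instead of a plaintext string, a password-type property can be set to a password-provider object. A sketch using the environment-variable provider; the property and variable names are examples:

```json
{
  "druid.metadata.storage.connector.password": {
    "type": "environment",
    "variable": "METADATA_STORAGE_PASSWORD"
  }
}
```

At startup, Druid resolves the password by reading the named environment variable rather than storing the secret in the properties file.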
Using query caching: Enabling query caching on Historicals · Enabling query caching on task executor services · Enabling query caching on Brokers · Enabling caching in the query context...
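The per-service toggles above map to pairs of `useCache`/`populateCache` runtime properties. A sketch of the Historical and Broker settings; which services to enable depends on your cluster layout:

```properties
# Historical: read from and write to the segment cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true

# Broker: read from and write to the whole-query / segment cache
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
```

Caching can also be controlled per query by setting `"useCache"` and `"populateCache"` in the query's `context` object.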
Realtime Process: Older versions of Apache Druid supported a standalone 'Realtime' process for querying and indexing in 'stream pull' modes of real-time ingestion. These ...