Introduction to Apache Druid
Apache Druid is a real-time analytics database designed for fast slice-and-dice analytics ("OLAP" queries) on large data sets. Most often, Druid powers use cases where real-time ingestion, fast query performance, and high uptime are important.
Druid is commonly used as the database backend for GUIs of analytical applications, or for highly-concurrent APIs that need fast aggregations. Druid works best with event-oriented data.
Common application areas for Druid include:
Use Case | Description |
---|---|
Clickstream analytics | Analyze user behavior on websites and mobile applications to understand navigation patterns, popular content, and user engagement |
Network telemetry analytics | Monitor and analyze network traffic and performance metrics to optimize network efficiency, identify bottlenecks, and ensure quality of service |
Server metrics storage | Collect and store performance metrics such as CPU usage, memory usage, disk I/O, and network activity to monitor server health and optimize resource allocation |
Supply chain analytics | Use data from various stages of the supply chain to optimize inventory management, streamline logistics, forecast demand, and improve overall operational efficiency |
Application performance metrics | Monitor and analyze the performance of software applications to identify areas for improvement, troubleshoot issues, and ensure optimal user experience |
Digital marketing/advertising analytics | Track and analyze the effectiveness of digital marketing campaigns and advertising efforts across various channels, such as social media, search engines, and display ads |
Business intelligence (BI)/OLAP (Online Analytical Processing) | Use data analysis tools and techniques to gather insights from large datasets, generate reports, and make data-driven decisions to improve business operations and strategy |
Customer analytics | Analyze customer data to understand preferences, behavior, and purchasing patterns, enabling personalized marketing strategies, improved customer service, and customer retention efforts |
IoT (Internet of Things) analytics | Process and analyze data generated by IoT devices to gain insights into device performance, user behavior, and environmental conditions, facilitating automation, optimization, and predictive maintenance |
Financial analytics | Evaluate finance data to gauge financial performance, manage risk, detect fraud, and make informed investment decisions |
Healthcare analytics | Analyze healthcare data to improve patient outcomes, optimize healthcare delivery, reduce costs, and identify trends and patterns in diseases and treatments |
Social media analytics | Monitor and analyze social media activity, such as likes, shares, comments, and mentions, to understand audience sentiment, track brand perception, and identify influencers |
If you are experimenting with a new use case for Druid or have questions about Druid’s capabilities and features, join the Apache Druid Slack channel. There, you can connect with Druid experts, ask questions, and get help in real time.
Key features of Druid
Druid’s core architecture combines ideas from data warehouses, timeseries databases, and logsearch systems. Some of Druid’s key features are:
- Columnar storage format. Druid uses column-oriented storage. This means it only loads the exact columns needed for a particular query. This greatly improves speed for queries that retrieve only a few columns. Additionally, to support fast scans and aggregations, Druid optimizes column storage for each column according to its data type.
- Scalable distributed system. Typical Druid deployments span clusters ranging from tens to hundreds of servers. Druid can ingest data at the rate of millions of records per second while retaining trillions of records and maintaining query latencies ranging from sub-second to a few seconds.
- Massively parallel processing. Druid can process each query in parallel across the entire cluster.
- Realtime or batch ingestion. Druid can ingest data either in real time or in batches. Ingested data is immediately available for querying.
- Self-healing, self-balancing, easy to operate. As an operator, you add servers to scale out or remove servers to scale down. The Druid cluster re-balances itself automatically in the background without any downtime. If a Druid server fails, the system automatically routes data around the damage until the server can be replaced. Druid is designed to run continuously without planned downtime for any reason, including configuration changes and software updates.
- Cloud-native, fault-tolerant architecture that won’t lose data. After ingestion, Druid safely stores a copy of your data in deep storage. Deep storage is typically cloud storage, HDFS, or a shared filesystem. You can recover your data from deep storage even in the unlikely case that all Druid servers fail. For a limited failure that affects only a few Druid servers, replication ensures that queries are still possible during system recoveries.
- Indexes for quick filtering. Druid uses Roaring or CONCISE compressed bitmap indexes to enable fast filtering and searching across multiple columns.
- Time-based partitioning. Druid first partitions data by time. You can optionally implement additional partitioning based upon other fields. Time-based queries only access the partitions that match the time range of the query, which leads to significant performance improvements.
- Approximate algorithms. Druid includes algorithms for approximate count-distinct, approximate ranking, and computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often substantially faster than exact computations. For situations where accuracy is more important than speed, Druid also offers exact count-distinct and exact ranking.
- Automatic summarization at ingest time. Druid optionally supports data summarization at ingestion time. This summarization partially pre-aggregates your data, potentially leading to significant cost savings and performance boosts.
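The effect of ingestion-time rollup described above can be illustrated with a small sketch. This is plain Python, not Druid's actual implementation, and the event fields (`page`, `country`, `bytes`) are made-up example dimensions and metrics: rows that share the same truncated timestamp and dimension values collapse into a single summarized row.

```python
from collections import defaultdict

def rollup(events, granularity_seconds=3600):
    """Toy model of ingestion-time rollup: pre-aggregate events
    by (time bucket, dimension values), keeping a row count and a
    summed metric instead of the raw rows."""
    buckets = defaultdict(lambda: {"count": 0, "bytes": 0})
    for ev in events:
        # Truncate the timestamp to the bucket boundary (hourly here).
        bucket_ts = ev["timestamp"] - ev["timestamp"] % granularity_seconds
        key = (bucket_ts, ev["page"], ev["country"])
        buckets[key]["count"] += 1
        buckets[key]["bytes"] += ev["bytes"]
    return {k: dict(v) for k, v in buckets.items()}

events = [
    {"timestamp": 1700000000, "page": "/home", "country": "US", "bytes": 120},
    {"timestamp": 1700000100, "page": "/home", "country": "US", "bytes": 80},
    {"timestamp": 1700000200, "page": "/docs", "country": "DE", "bytes": 300},
]
summary = rollup(events)
# Three raw events collapse into two summarized rows.
```

Because only the summarized rows are stored, rollup trades away individual-event detail for smaller storage and faster aggregation queries.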
When to use Druid
Druid is used by many companies of various sizes for many different use cases. For more information, see Powered by Apache Druid.
Druid is likely a good choice if your use case matches a few of the following:
- Insert rates are very high, but updates are less common.
- Most of your queries are aggregation and reporting queries, such as "group by" queries. You may also have searching and scanning queries.
- You are targeting query latencies of 100ms to a few seconds.
- Your data has a time component. Druid includes optimizations and design choices specifically related to time.
- You may have more than one table, but each query hits just one big distributed table. Queries may hit more than one smaller "lookup" table.
- You have high-cardinality data columns, such as URLs or user IDs, and need fast counting and ranking over them.
- You want to load data from Kafka, HDFS, flat files, or object storage like Amazon S3.
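The fast filtering and counting described above rests on the bitmap indexes from the feature list. The following toy sketch in Python (a stand-in for Druid's actual CONCISE/Roaring compressed bitmaps, using plain sets of row ids) shows the core idea: a multi-column filter becomes a bitmap intersection rather than a row-by-row scan.

```python
from collections import defaultdict

def build_bitmap_index(rows, column):
    """Map each distinct value in `column` to the set of row ids
    containing it -- a toy stand-in for a compressed bitmap index."""
    index = defaultdict(set)
    for row_id, row in enumerate(rows):
        index[row[column]].add(row_id)
    return index

rows = [
    {"country": "US", "device": "phone"},
    {"country": "US", "device": "desktop"},
    {"country": "DE", "device": "phone"},
    {"country": "US", "device": "phone"},
]
by_country = build_bitmap_index(rows, "country")
by_device = build_bitmap_index(rows, "device")

# Filter country = 'US' AND device = 'phone' by intersecting bitmaps,
# touching only the index rather than scanning every row.
matches = by_country["US"] & by_device["phone"]
# → {0, 3}
```

Counting the matches (or intersecting further bitmaps) stays cheap even when the columns have many distinct values, which is why this access pattern suits high-cardinality filtering.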
Situations where you would likely not want to use Druid include:
- You need low-latency updates of existing records using a primary key. Druid supports streaming inserts, but not streaming updates. You can perform updates using background batch jobs.
- You are building an offline reporting system where query latency is not very important.
- You want to do “big” joins, meaning joining one big fact table to another big fact table, and you are okay with these queries taking a long time to complete.
Learn more
- Try the Druid Quickstart.
- Learn more about Druid components in Design.
- Read about new features and improvements in Druid Releases.