Real-Time Dashboards

Citus provides real-time queries over large datasets. One workload we commonly see at Citus involves powering real-time dashboards of event data.

For example, you could be a cloud services provider helping other businesses monitor their HTTP traffic. Every time one of your clients receives an HTTP request, your service receives a log record. You want to ingest all those records and create an HTTP analytics dashboard which gives your clients insights such as the number of HTTP errors their sites served. It’s important that this data shows up with as little latency as possible so your clients can fix problems with their sites. It’s also important for the dashboard to show graphs of historical trends.

Alternatively, maybe you’re building an advertising network and want to show clients the clickthrough rates on their campaigns. In this example, latency is again critical, raw data volume is high, and both historical and live data are important.

In this section we’ll demonstrate how to build part of the first example, but this architecture would work equally well for the second and many other use-cases.

Data Model

The data we’re dealing with is an immutable stream of log data. We’ll insert directly into Citus but it’s also common for this data to first be routed through something like Kafka. Doing so has the usual advantages, and makes it easier to pre-aggregate the data once data volumes become unmanageably high.

We’ll use a simple schema for ingesting HTTP event data. This schema serves as an example to demonstrate the overall architecture; a real system might use additional columns.

  -- this is run on the coordinator
  CREATE TABLE http_request (
    site_id INT,
    ingest_time TIMESTAMPTZ DEFAULT now(),
    url TEXT,
    request_country TEXT,
    ip_address TEXT,
    status_code INT,
    response_time_msec INT
  );

  SELECT create_distributed_table('http_request', 'site_id');

When we call create_distributed_table we ask Citus to hash-distribute http_request using the site_id column. That means all the data for a particular site will live in the same shard.

The UDF uses the default configuration value for shard count (the citus.shard_count setting). We recommend using 2-4x as many shards as CPU cores in your cluster. Using this many shards lets you rebalance data across your cluster after adding new worker nodes.

Note

Azure Database for PostgreSQL — Hyperscale (Citus) uses streaming replication to achieve high availability, so maintaining shard replicas would be redundant. In any production environment where streaming replication is unavailable, you should set citus.shard_replication_factor to 2 or higher for fault tolerance.
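If you are provisioning a cluster yourself, these values can be adjusted before distributing the table. The following is a minimal sketch, assuming the citus.shard_count and citus.shard_replication_factor settings; the exact values depend on your hardware:

  -- illustrative values only: roughly 2-4x the number of CPU cores in the cluster
  SET citus.shard_count = 64;

  -- only needed when streaming replication is unavailable
  SET citus.shard_replication_factor = 2;

  SELECT create_distributed_table('http_request', 'site_id');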

With this, the system is ready to accept data and serve queries! Keep the following loop running in a psql console in the background while you continue with the other commands in this article. It continuously generates fake data, inserting a new row every quarter second or so.

  DO $$
  BEGIN LOOP
    INSERT INTO http_request (
      site_id, ingest_time, url, request_country,
      ip_address, status_code, response_time_msec
    ) VALUES (
      trunc(random()*32), clock_timestamp(),
      concat('http://example.com/', md5(random()::text)),
      ('{China,India,USA,Indonesia}'::text[])[ceil(random()*4)],
      concat(
        trunc(random()*250 + 2), '.',
        trunc(random()*250 + 2), '.',
        trunc(random()*250 + 2), '.',
        trunc(random()*250 + 2)
      )::inet,
      ('{200,404}'::int[])[ceil(random()*2)],
      5+trunc(random()*150)
    );
    COMMIT;
    PERFORM pg_sleep(random() * 0.25);
  END LOOP;
  END $$;
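In a second psql session, a quick sanity check confirms that rows are arriving:

  SELECT count(*) AS rows_ingested, max(ingest_time) AS latest_event
  FROM http_request;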

Once you’re ingesting data, you can run dashboard queries such as:

  SELECT
    site_id,
    date_trunc('minute', ingest_time) as minute,
    COUNT(1) AS request_count,
    SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
    SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
    SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
  FROM http_request
  WHERE date_trunc('minute', ingest_time) > now() - '5 minutes'::interval
  GROUP BY site_id, minute
  ORDER BY minute ASC;

The setup described above works, but has two drawbacks:

  • Your HTTP analytics dashboard must go over each row every time it needs to generate a graph. For example, if your clients are interested in trends over the past year, your queries will aggregate every row for the past year from scratch.

  • Your storage costs will grow proportionally with the ingest rate and the length of the queryable history. In practice, you may want to keep raw events for a shorter period of time (one month) and look at historical graphs over a longer time period (years).

Rollups

You can overcome both drawbacks by rolling up the raw data into a pre-aggregated form. Here, we’ll aggregate the raw data into a table which stores summaries of 1-minute intervals. In a production system, you would probably also want something like 1-hour and 1-day intervals; these each correspond to zoom levels in the dashboard. When the user wants request times for the last month, the dashboard can simply read and chart the values for each of the last 30 days.

  CREATE TABLE http_request_1min (
    site_id INT,
    ingest_time TIMESTAMPTZ, -- which minute this row represents
    error_count INT,
    success_count INT,
    request_count INT,
    average_response_time_msec INT,
    CHECK (request_count = error_count + success_count),
    CHECK (ingest_time = date_trunc('minute', ingest_time))
  );

  SELECT create_distributed_table('http_request_1min', 'site_id');

  CREATE INDEX http_request_1min_idx ON http_request_1min (site_id, ingest_time);

This looks a lot like the previous code block. Most importantly, it also shards on site_id and uses the same default configuration for shard count and replication factor. Because all three of those match, there’s a 1-to-1 correspondence between http_request shards and http_request_1min shards, and Citus will place matching shards on the same worker. This is called co-location; it makes queries such as joins faster and our rollups possible.

(Figure: co-location in Citus)
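If you want to see the placement for yourself, the Citus metadata catalogs show where each shard lives. A minimal sketch using the pg_dist_shard and pg_dist_shard_placement catalogs (the output will vary by cluster):

  -- list shards of both tables together with the worker nodes hosting them;
  -- co-located shards of the two tables cover the same hash ranges
  SELECT logicalrelid, shardid, shardminvalue, shardmaxvalue, nodename, nodeport
  FROM pg_dist_shard
  JOIN pg_dist_shard_placement USING (shardid)
  WHERE logicalrelid IN ('http_request'::regclass, 'http_request_1min'::regclass)
  ORDER BY shardminvalue::int, logicalrelid;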

In order to populate http_request_1min we’re going to periodically run an INSERT INTO ... SELECT. This is possible because the tables are co-located. The following function wraps up the rollup query for convenience.

  -- single-row table to store when we rolled up last
  CREATE TABLE latest_rollup (
    minute timestamptz PRIMARY KEY,
    -- "minute" should be no more precise than a minute
    CHECK (minute = date_trunc('minute', minute))
  );

  -- initialize to a time long ago
  INSERT INTO latest_rollup VALUES ('10-10-1901');

  -- function to do the rollup
  CREATE OR REPLACE FUNCTION rollup_http_request() RETURNS void AS $$
  DECLARE
    curr_rollup_time timestamptz := date_trunc('minute', now());
    last_rollup_time timestamptz := minute from latest_rollup;
  BEGIN
    INSERT INTO http_request_1min (
      site_id, ingest_time, request_count,
      success_count, error_count, average_response_time_msec
    ) SELECT
      site_id,
      date_trunc('minute', ingest_time),
      COUNT(1) as request_count,
      SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
      SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
      SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
    FROM http_request
    -- roll up only data new since last_rollup_time
    WHERE date_trunc('minute', ingest_time) <@
          tstzrange(last_rollup_time, curr_rollup_time, '(]')
    GROUP BY 1, 2;

    -- update the value in latest_rollup so that next time we run the
    -- rollup it will operate on data newer than curr_rollup_time
    UPDATE latest_rollup SET minute = curr_rollup_time;
  END;
  $$ LANGUAGE plpgsql;
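You can run the rollup once by hand to check that it works:

  SELECT rollup_http_request();

  SELECT * FROM http_request_1min
  ORDER BY ingest_time DESC, site_id
  LIMIT 5;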

Note

The rollup_http_request() function should be called every minute. You could do this by adding a crontab entry on the coordinator node:

  * * * * * psql -c 'SELECT rollup_http_request();'

Alternatively, an extension such as pg_cron allows you to schedule recurring queries directly from the database.
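For example, assuming pg_cron is installed on the coordinator, the schedule could look roughly like this:

  -- requires the pg_cron extension; the schedule uses standard cron syntax
  SELECT cron.schedule('* * * * *', 'SELECT rollup_http_request()');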

The dashboard query from earlier is now a lot nicer:

  SELECT site_id, ingest_time as minute, request_count,
    success_count, error_count, average_response_time_msec
  FROM http_request_1min
  WHERE ingest_time > date_trunc('minute', now()) - '5 minutes'::interval;

Expiring Old Data

The rollups make queries faster, but we still need to expire old data to avoid unbounded storage costs. Simply decide how long you’d like to keep data for each granularity, and use standard queries to delete expired data. In the following example, we decided to keep raw data for one day, and per-minute aggregations for one month:

  DELETE FROM http_request WHERE ingest_time < now() - interval '1 day';
  DELETE FROM http_request_1min WHERE ingest_time < now() - interval '1 month';

In production you could wrap these queries in a function and call it every minute in a cron job.
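A sketch of such a function, following the same pattern as rollup_http_request (the name expire_old_request_data is just an example):

  CREATE OR REPLACE FUNCTION expire_old_request_data() RETURNS void AS $$
  BEGIN
    -- keep raw events for one day and per-minute rollups for one month
    DELETE FROM http_request WHERE ingest_time < now() - interval '1 day';
    DELETE FROM http_request_1min WHERE ingest_time < now() - interval '1 month';
  END;
  $$ LANGUAGE plpgsql;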

Data expiration can go even faster by using table range partitioning on top of Citus hash distribution. See the Timeseries Data section for a detailed example.

Those are the basics! We provided an architecture that ingests HTTP events and then rolls up these events into their pre-aggregated form. This way, you can both store raw events and also power your analytical dashboards with subsecond queries.

The next sections build on this basic architecture and show you how to resolve questions which often appear.

Approximate Distinct Counts

A common question in HTTP analytics deals with approximate distinct counts: How many unique visitors visited your site over the last month? Answering this question exactly requires storing the list of all previously-seen visitors in the rollup tables, a prohibitively large amount of data. However, an approximate answer is much more manageable.

A datatype called hyperloglog, or HLL, can answer the query approximately; it takes a surprisingly small amount of space to tell you approximately how many unique elements are in a set, and its accuracy can be adjusted. We’ll use HLLs which, using only 1280 bytes, can count up to tens of billions of unique visitors with at most 2.2% error.

An equivalent problem appears if you want to run a global query, such as the number of unique IP addresses which visited any of your clients’ sites over the last month. Without HLLs this query involves shipping lists of IP addresses from the workers to the coordinator for it to deduplicate. That’s both a lot of network traffic and a lot of computation. By using HLLs you can greatly improve query speed.

First you must install the HLL extension; the GitHub repo has instructions. Next, you have to enable it:

  CREATE EXTENSION hll;

Note

This is not necessary on Hyperscale, which has HLL already installed, along with other useful extensions.
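As a quick check that the extension works, you can compute an approximate distinct count directly against the raw table, using the same hll_hash_text, hll_add_agg, and hll_cardinality functions we rely on below:

  SELECT hll_cardinality(hll_add_agg(hll_hash_text(ip_address))) AS distinct_ips
  FROM http_request;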

Now we’re ready to track IP addresses in our rollup with HLL. First add a column to the rollup table.

  ALTER TABLE http_request_1min ADD COLUMN distinct_ip_addresses hll;

Next, use the hll_add_agg aggregate to populate the column. Just add it to the query in the rollup function:

  @@ -1,10 +1,12 @@
    INSERT INTO http_request_1min (
      site_id, ingest_time, request_count,
      success_count, error_count, average_response_time_msec
  +   , distinct_ip_addresses
    ) SELECT
      site_id,
      date_trunc('minute', ingest_time),
      COUNT(1) as request_count,
      SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
      SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
      SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
  +   , hll_add_agg(hll_hash_text(ip_address)) AS distinct_ip_addresses
    FROM http_request

Dashboard queries are a little more complicated: you have to read out the distinct number of IP addresses by calling the hll_cardinality function:

  SELECT site_id, ingest_time as minute, request_count,
    success_count, error_count, average_response_time_msec,
    hll_cardinality(distinct_ip_addresses) AS distinct_ip_address_count
  FROM http_request_1min
  WHERE ingest_time > date_trunc('minute', now()) - interval '5 minutes';

HLLs aren’t just faster; they let you do things you couldn’t do before. Say we did our rollups, but instead of using HLLs we saved the exact unique counts. This works fine, but you can’t answer queries such as “how many distinct visitors were there during a one-week period in the past for which we’ve already thrown away the raw data?”.

With HLLs, this is easy. You can compute distinct IP counts over a time period with the following query:

  SELECT hll_cardinality(hll_union_agg(distinct_ip_addresses))
  FROM http_request_1min
  WHERE ingest_time > date_trunc('minute', now()) - '5 minutes'::interval;

You can find more information on HLLs in the project’s GitHub repository.

Unstructured Data with JSONB

Citus works well with Postgres’ built-in support for unstructured data types. To demonstrate this, let’s keep track of the number of visitors which came from each country. Using a semi-structured data type saves you from needing to add a column for every individual country and ending up with rows that have hundreds of sparsely filled columns. We have a blog post explaining which format to use for your semi-structured data. The post recommends JSONB; here we’ll demonstrate how to incorporate JSONB columns into your data model.

First, add the new column to our rollup table:

  ALTER TABLE http_request_1min ADD COLUMN country_counters JSONB;

Next, include it in the rollups by modifying the rollup function:

  @@ -1,14 +1,19 @@
    INSERT INTO http_request_1min (
      site_id, ingest_time, request_count,
      success_count, error_count, average_response_time_msec
  +   , country_counters
    ) SELECT
      site_id,
      date_trunc('minute', ingest_time),
      COUNT(1) as request_count,
      SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
      SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
      SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
  +   , jsonb_object_agg(request_country, country_count) AS country_counters
  - FROM http_request
  + FROM (
  +   SELECT *,
  +     count(1) OVER (
  +       PARTITION BY site_id, date_trunc('minute', ingest_time), request_country
  +     ) AS country_count
  +   FROM http_request
  + ) h

Now, if you want to get the number of requests which came from America in your dashboard, you can modify the dashboard query to look like this:

  SELECT
    request_count, success_count, error_count, average_response_time_msec,
    COALESCE(country_counters->>'USA', '0')::int AS american_visitors
  FROM http_request_1min
  WHERE ingest_time > date_trunc('minute', now()) - '5 minutes'::interval;
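Because the counters are stored as JSONB, you can also break requests down by every country at query time. A minimal sketch using the standard jsonb_each_text function (rows rolled up before the column was added simply contribute nothing):

  -- total requests per country over the last five minutes
  SELECT kv.country, SUM(kv.request_count::int) AS requests
  FROM http_request_1min,
       jsonb_each_text(country_counters) AS kv(country, request_count)
  WHERE ingest_time > date_trunc('minute', now()) - '5 minutes'::interval
  GROUP BY kv.country
  ORDER BY requests DESC;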