- Troubleshooting
- Failed to start a background worker
- Continuous aggregate watermark is in the future
- Hypertable retention policy isn’t applying to continuous aggregates
- Continuous aggregate doesn’t refresh with newly inserted historical data
- Permissions error when migrating a continuous aggregate
- Queries fail when defining continuous aggregates but work on regular tables
- Updates to previously materialized regions aren’t shown in real-time aggregates
- Cannot refresh compressed chunks of a continuous aggregate
- Queries using `locf()` don’t treat `NULL` values as missing
- Scheduled jobs stop running
Troubleshooting
This section contains some ideas for troubleshooting common problems experienced with continuous aggregates.
Failed to start a background worker
`"<TYPE_OF_BACKGROUND_JOB>": failed to start a background worker`
You might see this error message in the logs if background workers aren’t properly configured.
To fix this error, make sure that `max_worker_processes`, `max_parallel_workers`, and `timescaledb.max_background_workers` are properly set. `timescaledb.max_background_workers` should equal the number of databases plus the number of concurrent background workers. `max_worker_processes` should equal the sum of `timescaledb.max_background_workers` and `max_parallel_workers`.
For more information, see the worker configuration docs.
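For example, a minimal sketch for self-hosted TimescaleDB, assuming one database and up to 8 concurrent background workers (illustrative numbers; adjust them for your deployment, and note that these settings require a restart to take effect):

```sql
-- timescaledb.max_background_workers = number of databases + concurrent background workers
ALTER SYSTEM SET timescaledb.max_background_workers = 9;  -- 1 database + 8 workers
ALTER SYSTEM SET max_parallel_workers = 8;
-- max_worker_processes = timescaledb.max_background_workers + max_parallel_workers
ALTER SYSTEM SET max_worker_processes = 17;               -- 9 + 8
```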
Continuous aggregate watermark is in the future
Continuous aggregates use a watermark to indicate which time buckets have already been materialized. When you query a continuous aggregate, your query returns materialized data from before the watermark. It returns real-time, non-materialized data from after the watermark.
In certain cases, the watermark might be in the future. If this happens, all buckets, including the most recent bucket, are materialized and below the watermark. No real-time data is returned.
This might happen if you refresh your continuous aggregate over the time window `<START_TIME>, NULL`, which materializes all recent data. It might also happen if you create a continuous aggregate using the `WITH DATA` option. This also implicitly refreshes your continuous aggregate with a window of `NULL, NULL`.
To fix this, create a new continuous aggregate using the `WITH NO DATA` option. Then use a policy to refresh this continuous aggregate over an explicit time window.
Creating a new continuous aggregate with an explicit refresh window
Create a continuous aggregate using the `WITH NO DATA` option:
CREATE MATERIALIZED VIEW <continuous_aggregate_name>
WITH (timescaledb.continuous)
AS SELECT time_bucket('<interval>', <time_column>) AS bucket,
    <other_columns_to_select>,
    ...
FROM <hypertable>
GROUP BY bucket, <optional_other_columns>
WITH NO DATA;
Refresh the continuous aggregate using a policy with an explicit `end_offset`. For example:
SELECT add_continuous_aggregate_policy('<continuous_aggregate_name>',
start_offset => INTERVAL '30 day',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '1 hour');
Check your new continuous aggregate’s watermark to make sure it is in the past, not the future.
Get the ID for the materialization hypertable that contains the actual continuous aggregate data:
SELECT id FROM _timescaledb_catalog.hypertable
WHERE table_name=(
    SELECT materialization_hypertable_name
    FROM timescaledb_information.continuous_aggregates
    WHERE view_name='<continuous_aggregate_name>'
);
Use the returned ID to query for the watermark’s timestamp:
SELECT COALESCE(
_timescaledb_internal.to_timestamp(_timescaledb_internal.cagg_watermark(<ID>)),
'-infinity'::timestamp with time zone
);
Warning: If you choose to delete your old continuous aggregate after creating a new one, beware of historical data loss. If your old continuous aggregate contained data that you dropped from your original hypertable, for example through a data retention policy, the dropped data is not included in your new continuous aggregate.
Hypertable retention policy isn’t applying to continuous aggregates
A retention policy set on a hypertable does not apply to any continuous aggregates made from the hypertable. This allows you to set different retention periods for raw and summarized data. To apply a retention policy to a continuous aggregate, set the policy on the continuous aggregate itself.
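For example, a minimal sketch, assuming a hypertable named `conditions` and a continuous aggregate named `conditions_summary` (both names are illustrative):

```sql
-- Keep raw data for 30 days, but keep the summarized data for a year
SELECT add_retention_policy('conditions', drop_after => INTERVAL '30 days');
SELECT add_retention_policy('conditions_summary', drop_after => INTERVAL '1 year');
```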
Continuous aggregate doesn’t refresh with newly inserted historical data
Materialized views are generally used with ordered data. If you insert historical data, or any other data outside the refresh window, you need to manually refresh the affected region so that the aggregate reflects the backfilled values. You can also set up an after-insert rule or trigger on your hypertable to track which regions need refreshing as the data is merged.
Say you have already materialized the ordered time buckets A, B, D, and F. If you now insert E, you need to refresh E and F. However, if you insert C, you need to refresh C, D, E, and F.
For example:
- A, B, D, and F are already materialized in a view with all data.
- To insert C, split the data into `AB` and `DEF` subsets. `AB` is consistent with the materialized data, so you only need to reuse it.
- Insert C and `DEF`, then refresh the aggregate from C onward, as shown in the sketch below.
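A minimal sketch of that manual refresh; the table, view, and dates are illustrative:

```sql
-- Backfill a historical row into the hypertable
INSERT INTO conditions (day, city, temperature)
VALUES ('2021-06-03', 'Moscow', 23);

-- Refresh every bucket from the backfilled point onward
CALL refresh_continuous_aggregate(
    'conditions_summary',
    '2021-06-01',  -- start at the earliest invalidated bucket
    '2021-06-28'   -- through the latest bucket that depends on it
);
```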
Refreshing like this can use a lot of resources, especially if you regularly backfill important data far in the past.
For example, if you have 300 columns on a single hypertable but use only five of them in a continuous aggregate, refreshes can become expensive. It might make more sense to isolate those columns in a separate hypertable. Alternatively, you might create one hypertable per metric and refresh them independently.
Permissions error when migrating a continuous aggregate
You might get a permissions error when migrating a continuous aggregate from the old format to the new format using `cagg_migrate`. The user performing the migration must have the following permissions:
- Select, insert, and update permissions on the tables `_timescaledb_catalog.continuous_agg_migrate_plan` and `_timescaledb_catalog.continuous_agg_migrate_plan_step`
- Usage permissions on the sequence `_timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq`
To solve the problem, change to a user capable of granting permissions, and grant the following permissions to the user performing the migration:
GRANT SELECT, INSERT, UPDATE ON TABLE _timescaledb_catalog.continuous_agg_migrate_plan TO <USER>;
GRANT SELECT, INSERT, UPDATE ON TABLE _timescaledb_catalog.continuous_agg_migrate_plan_step TO <USER>;
GRANT USAGE ON SEQUENCE _timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq TO <USER>;
Queries fail when defining continuous aggregates but work on regular tables
`ERROR: invalid continuous aggregate view`
`SQL state: 0A000`
Continuous aggregates don’t work on all queries. If you are using a function that continuous aggregates do not support, you see the error above.
TimescaleDB doesn’t support window functions or `JOIN` clauses on continuous aggregates. In versions earlier than 2.7, it doesn’t support any non-parallelizable SQL aggregates.
This table summarizes aggregate function support in continuous aggregates:
| Function, clause, or feature | TimescaleDB 2.6 and earlier | TimescaleDB 2.7 and above |
|---|---|---|
| Parallelizable aggregate functions | ✅ | ✅ |
| Non-parallelizable aggregate functions | ❌ | ✅ |
| `ORDER BY` | ❌ | ✅ |
| Ordered-set aggregates | ❌ | ✅ |
| Hypothetical-set aggregates | ❌ | ✅ |
| `DISTINCT` in aggregate functions | ❌ | ✅ |
| `FILTER` in aggregate functions | ❌ | ✅ |
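For instance, a minimal sketch, assuming a hypertable like the `conditions` table defined in the next section: on TimescaleDB 2.6 and earlier, the `DISTINCT` aggregate below fails with `invalid continuous aggregate view`, while on 2.7 and above it is accepted.

```sql
CREATE MATERIALIZED VIEW conditions_distinct
WITH (timescaledb.continuous) AS
SELECT city,
       time_bucket('7 days', day) AS bucket,
       COUNT(DISTINCT temperature) AS distinct_temps  -- DISTINCT: rejected before 2.7
FROM conditions
GROUP BY city, bucket
WITH NO DATA;
```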
Updates to previously materialized regions aren’t shown in real-time aggregates
Real-time aggregates automatically add the most recent data when you query your continuous aggregate. In other words, they include data more recent than your last materialized bucket.
If you add new historical data to an already-materialized bucket, it isn’t reflected in a real-time aggregate. Wait for the next scheduled refresh, or refresh manually by calling `refresh_continuous_aggregate`. You can think of real-time aggregates as being eventually consistent for historical data.
The following example shows how this works.
Create and fill the hypertable:
CREATE TABLE conditions(
day DATE NOT NULL,
city text NOT NULL,
temperature INT NOT NULL);
SELECT create_hypertable(
'conditions', 'day',
chunk_time_interval => INTERVAL '1 day'
);
INSERT INTO conditions (day, city, temperature) VALUES
('2021-06-14', 'Moscow', 26),
('2021-06-15', 'Moscow', 22),
('2021-06-16', 'Moscow', 24),
('2021-06-17', 'Moscow', 24),
('2021-06-18', 'Moscow', 27),
('2021-06-19', 'Moscow', 28),
('2021-06-20', 'Moscow', 30),
('2021-06-21', 'Moscow', 31),
('2021-06-22', 'Moscow', 34),
('2021-06-23', 'Moscow', 34),
('2021-06-24', 'Moscow', 34),
('2021-06-25', 'Moscow', 32),
('2021-06-26', 'Moscow', 32),
('2021-06-27', 'Moscow', 31);
Create a continuous aggregate, but do not materialize any data. Note that real-time aggregation is enabled by default:
CREATE MATERIALIZED VIEW conditions_summary
WITH (timescaledb.continuous) AS
SELECT city,
time_bucket('7 days', day) AS bucket,
MIN(temperature),
MAX(temperature)
FROM conditions
GROUP BY city, bucket
WITH NO DATA;
Because real-time aggregation is enabled, the `SELECT` query returns data. The query on the continuous aggregate fetches it directly from the hypertable:
SELECT * FROM conditions_summary ORDER BY bucket;
city | bucket | min | max
--------+------------+-----+-----
Moscow | 2021-06-14 | 22 | 30
Moscow | 2021-06-21 | 31 | 34
Materialize data into the continuous aggregate:
CALL refresh_continuous_aggregate('conditions_summary', '2021-06-14', '2021-06-21');
The `SELECT` query returns the same data, as expected, but this time the data is fetched from the underlying materialization table:
SELECT * FROM conditions_summary ORDER BY bucket;
city | bucket | min | max
--------+------------+-----+-----
Moscow | 2021-06-14 | 22 | 30
Moscow | 2021-06-21 | 31 | 34
Update the data in the previously materialized bucket:
UPDATE conditions
SET temperature = 35
WHERE day = '2021-06-14' and city = 'Moscow';
The updated data is not yet visible when you query the continuous aggregate, because the change has not been materialized. (Similarly, any `INSERT` or `DELETE` in the materialized region would not be visible.)
SELECT * FROM conditions_summary ORDER BY bucket;
city | bucket | min | max
--------+------------+-----+-----
Moscow | 2021-06-14 | 22 | 30
Moscow | 2021-06-21 | 31 | 34
Refresh the data again to update the previously materialized region:
CALL refresh_continuous_aggregate('conditions_summary', '2021-06-14', '2021-06-21');
SELECT * FROM conditions_summary ORDER BY bucket;
city | bucket | min | max
--------+------------+-----+-----
Moscow | 2021-06-14 | 22 | 35
Moscow | 2021-06-21 | 31 | 34
Cannot refresh compressed chunks of a continuous aggregate
`ERROR: cannot update/delete rows from chunk <CHUNK_NAME> as it is compressed`
Compressed chunks of a continuous aggregate can’t be refreshed. This follows from a general limitation where compressed chunks can’t be updated or deleted.
If you receive historical data and must refresh a compressed region, first decompress the chunk. Then manually run `refresh_continuous_aggregate`, as in the sketch below.
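A minimal sketch, assuming the `conditions_summary` aggregate from the previous section; the chunk name is illustrative and belongs to the aggregate’s materialization hypertable:

```sql
-- Decompress the chunk that covers the region you need to refresh
SELECT decompress_chunk('_timescaledb_internal._hyper_2_4_chunk');

-- Then refresh the region manually
CALL refresh_continuous_aggregate('conditions_summary', '2021-06-14', '2021-06-21');
```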
Queries using `locf()` don’t treat `NULL` values as missing
When you have a query that uses a last observation carried forward (`locf`) function, the query carries forward `NULL` values by default. If you want the function to treat `NULL` values as missing instead, set `treat_null_as_missing := true` as the second parameter in the query. For example:
SELECT * FROM (
    SELECT time_bucket_gapfill(4, time, -5, 13),
           locf(avg(v)::int, treat_null_as_missing := true)
    FROM (VALUES (0, 0), (8, NULL)) v(time, v)
    WHERE time BETWEEN 0 AND 10
    GROUP BY 1
) i ORDER BY 1 DESC;
time_bucket_gapfill | locf
---------------------+------
12 | 0
8 | 0
4 | 0
0 | 0
-4 |
-8 |
(6 rows)
Scheduled jobs stop running
Your scheduled jobs might stop running for various reasons. On self-hosted TimescaleDB, you can fix this by restarting background workers:
SELECT _timescaledb_internal.start_background_workers();
On Timescale Cloud and Managed Service for TimescaleDB, restart background workers by doing one of the following:
- Run `SELECT timescaledb_pre_restore()`, followed by `SELECT timescaledb_post_restore()`, as shown below.
- Power the service off and on again. This might cause a downtime of a few minutes while the service restores from backup and replays the write-ahead log.
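The first option as a copy-pasteable snippet; the first call stops all background workers, and the second restarts them:

```sql
SELECT timescaledb_pre_restore();
SELECT timescaledb_post_restore();
```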