Database bloat occurs in heap tables, append-optimized tables, indexes, and system catalogs and affects database performance and disk usage. You can detect database bloat and remove it from the database.
- About Bloat
- Detecting Bloat
- Removing Bloat from Database Tables
- Removing Bloat from Append-Optimized Tables
- Removing Bloat from Indexes
- Removing Bloat from System Catalogs
About Bloat
Database bloat is disk space that was used by a table or index and is available for reuse by the database but has not been reclaimed. Bloat is created when updating tables or indexes.
Because Greenplum Database heap tables use the PostgreSQL Multiversion Concurrency Control (MVCC) storage implementation, a deleted or updated row is logically deleted from the database, but a non-visible image of the row remains in the table. These deleted rows, also called expired rows, are tracked in a free space map. Running VACUUM marks the expired rows as free space that is available for reuse by subsequent inserts.
It is normal for tables that have frequent updates to have a small or moderate amount of expired rows and free space that will be reused as new data is added. But when the table is allowed to grow so large that active data occupies just a small fraction of the space, the table has become significantly bloated. Bloated tables require more disk storage and additional I/O that can slow down query execution.
Important: Run VACUUM on individual tables after large UPDATE and DELETE operations to avoid ever having to run VACUUM FULL.
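For example, after a large delete you might vacuum the affected table immediately so the expired rows become reusable free space (the table name and predicate here are hypothetical):
DELETE FROM sales WHERE sale_date < date '2015-01-01';  -- a large DELETE leaves many expired rows behind
VACUUM sales;                                            -- record the expired rows as reusable free space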
Running the VACUUM command regularly on tables prevents them from growing too large. If the table does become significantly bloated, the VACUUM FULL command must be used to compact the table data.
If the free space map is not large enough to accommodate all of the expired rows, the VACUUM command is unable to reclaim space for expired rows that overflowed the free space map. The disk space may only be recovered by running VACUUM FULL, which locks the table, creates a new table, copies the table data to the new table, and then drops the old table. This is an expensive operation that can take an exceptionally long time to complete for a large table.
Warning: VACUUM FULL acquires an ACCESS EXCLUSIVE lock on tables, so avoid running it if possible. If you must run VACUUM FULL on a table, do so during a time when users and applications do not require access to the table, such as during a period of low activity or a maintenance window.
Detecting Bloat
The statistics collected by the ANALYZE statement can be used to calculate the expected number of disk pages required to store a table. The difference between the expected number of pages and the actual number of pages is a measure of bloat. The gp_toolkit schema provides the gp_bloat_diag view that identifies table bloat by comparing the ratio of actual pages to expected pages. To use it, make sure statistics are up to date for all of the tables in the database, then run the following SQL:
gpadmin=# SELECT * FROM gp_toolkit.gp_bloat_diag;
bdirelid | bdinspname | bdirelname | bdirelpages | bdiexppages | bdidiag
----------+------------+------------+-------------+-------------+---------------------------------------
21488 | public | t1 | 97 | 1 | significant amount of bloat suspected
(1 row)
The results include only tables with moderate or significant bloat. Moderate bloat is reported when the ratio of actual to expected pages is greater than four and less than ten. Significant bloat is reported when the ratio is greater than ten.
The gp_toolkit.gp_bloat_expected_pages view lists the actual number of used pages and the expected number of used pages for each database object.
gpadmin=# SELECT * FROM gp_toolkit.gp_bloat_expected_pages LIMIT 5;
btdrelid | btdrelpages | btdexppages
----------+-------------+-------------
10789 | 1 | 1
10794 | 1 | 1
10799 | 1 | 1
5004 | 1 | 1
7175 | 1 | 1
(5 rows)
The btdrelid is the object ID of the table. The btdrelpages column reports the number of pages the table uses; the btdexppages column is the number of pages expected. Again, the numbers reported are based on the table statistics, so be sure to run ANALYZE on tables that have changed.
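Because gp_bloat_expected_pages reports object IDs, it is often convenient to join it against the system catalogs to see schema and table names. The following query is a sketch; the join columns are standard catalog columns, and the ordering is only illustrative:
SELECT n.nspname, c.relname, b.btdrelpages, b.btdexppages
  FROM gp_toolkit.gp_bloat_expected_pages b
  JOIN pg_class c ON c.oid = b.btdrelid
  JOIN pg_namespace n ON n.oid = c.relnamespace
 ORDER BY b.btdrelpages - b.btdexppages DESC
 LIMIT 10;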
Removing Bloat from Database Tables
The VACUUM command adds expired rows to the free space map so that the space can be reused. When VACUUM is run regularly on a table that is frequently updated, the space occupied by the expired rows can be promptly reused, preventing the table file from growing larger. It is also important to run VACUUM before the free space map is filled. For heavily updated tables, you may need to run VACUUM at least once a day to prevent the table from becoming bloated.
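One way to vacuum a heavily updated table on a schedule is to call the vacuumdb client utility from cron. This is a minimal sketch; the table, database, and schedule are hypothetical, and it assumes the cron user can connect to the database without a password prompt:
# crontab entry: vacuum the sales table every night at 02:00
0 2 * * * vacuumdb --table=sales --dbname=mydb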
Warning: When a table is significantly bloated, run VACUUM before running ANALYZE. Analyzing a severely bloated table can generate poor statistics if the sample contains mostly empty pages.
When a table accumulates significant bloat, running the VACUUM command is insufficient. For small tables, running VACUUM FULL <table_name> can reclaim space used by rows that overflowed the free space map and reduce the size of the table file. However, a VACUUM FULL statement is an expensive operation that requires an ACCESS EXCLUSIVE lock and may take an exceptionally long and unpredictable amount of time to finish for large tables. You should run VACUUM FULL on tables during a time when users and applications do not require access to the tables being vacuumed, such as during a time of low activity, or during a maintenance window.
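For example, during a maintenance window you might compact a significantly bloated table and then refresh its statistics (the table name is hypothetical):
VACUUM FULL sales;   -- rewrites the table; requires an ACCESS EXCLUSIVE lock
ANALYZE sales;       -- refresh statistics after the table has been rewritten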
Removing Bloat from Append-Optimized Tables
Append-optimized tables are handled very differently from heap tables. Although append-optimized tables allow update, insert, and delete operations, these operations are not optimized and are not recommended with append-optimized tables. If you heed this advice and use append-optimized tables for load-once/read-many workloads, VACUUM on an append-optimized table runs almost instantaneously.
If you do run UPDATE or DELETE commands on an append-optimized table, expired rows are tracked in an auxiliary bitmap instead of the free space map. VACUUM is the only way to recover the space. Running VACUUM on an append-optimized table with expired rows compacts the table by rewriting the entire table without the expired rows. However, compaction is performed only if the percentage of expired rows in the table exceeds the value of the gp_appendonly_compaction_threshold configuration parameter, which is 10 (10%) by default. The threshold is checked on each segment, so it is possible that a VACUUM statement will compact an append-optimized table on some segments and not others. Compaction of append-optimized tables can be deactivated by setting the gp_appendonly_compaction parameter to no.
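For example, you can inspect the compaction settings and then vacuum an append-optimized table. This sketch assumes the parameters can be set at the session level; the table name is hypothetical and the lowered threshold is only an illustration:
SHOW gp_appendonly_compaction_threshold;     -- default is 10 (percent)
SET gp_appendonly_compaction_threshold = 5;  -- compact when more than 5% of rows are expired
VACUUM sales_ao;                             -- compacts segment files whose expired-row percentage exceeds the threshold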
Removing Bloat from Indexes
The VACUUM command only recovers space from tables. To recover the space from indexes, recreate them using the REINDEX command.
To rebuild all indexes on a table, run REINDEX TABLE <table_name>;. To rebuild a particular index, run REINDEX INDEX <index_name>;. REINDEX sets the reltuples and relpages statistics to 0 (zero) for the index. To update those statistics, run ANALYZE on the table after reindexing.
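For example (the table and index names are hypothetical):
REINDEX TABLE sales;           -- rebuild all indexes on the table
REINDEX INDEX sales_date_idx;  -- rebuild a single index
ANALYZE sales;                 -- update reltuples and relpages for the rebuilt indexes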
Removing Bloat from System Catalogs
Greenplum Database system catalog tables are heap tables and can become bloated over time. As database objects are created, altered, or dropped, expired rows are left in the system catalogs. Using gpload to load data contributes to the bloat since gpload creates and drops external tables. (Rather than using gpload, it is recommended to use gpfdist to load data.)
Bloat in the system catalogs increases the time required to scan the tables, for example, when creating explain plans. System catalogs are scanned frequently, and if they become bloated, overall system performance is degraded.
It is recommended to run VACUUM on system catalog tables nightly, and at a minimum weekly. At the same time, running REINDEX SYSTEM on system catalog tables removes bloat from the indexes. Alternatively, you can reindex system tables using the reindexdb utility with the -s (--system) option. After removing catalog bloat, run ANALYZE to update catalog table statistics.
These are the Greenplum Database system catalog maintenance steps.
- Perform a REINDEX on the system catalog tables to rebuild the system catalog indexes. This removes bloat in the indexes and improves VACUUM performance.
  Note: When performing REINDEX on the system catalog tables, locking will occur on the tables and might have an impact on currently running queries. You can schedule the REINDEX operation during a period of low activity to avoid disrupting ongoing business operations.
- Perform a VACUUM on the system catalog tables.
- Perform an ANALYZE on the system catalog tables to update the table statistics.
If you are performing system catalog maintenance during a maintenance period and you need to stop a process due to time constraints, run the Greenplum Database function pg_cancel_backend(<PID>) to safely stop a Greenplum Database process.
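For example, you might look up the server process ID of the maintenance session in pg_stat_activity and cancel it. This is a sketch; the process ID shown is hypothetical, and the column names in pg_stat_activity (such as pid versus procpid) vary by Greenplum Database version:
SELECT pid, query FROM pg_stat_activity;  -- identify the session running the maintenance command
SELECT pg_cancel_backend(12345);          -- cancel that session's current command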
The following script runs REINDEX, VACUUM, and ANALYZE on the system catalogs.
#!/bin/bash
DBNAME="<database_name>"
# Build a SQL fragment that lists every heap table (relkind='r') in the pg_catalog schema.
SYSTABLES="' pg_catalog.' || relname || ';' from pg_class a, pg_namespace b \
where a.relnamespace=b.oid and b.nspname='pg_catalog' and a.relkind='r'"
# Rebuild the system catalog indexes.
reindexdb -s -d $DBNAME
# Generate a VACUUM statement for each catalog table and run the statements.
psql -tc "SELECT 'VACUUM' || $SYSTABLES" $DBNAME | psql -a $DBNAME
# Refresh statistics on the pg_catalog schema.
analyzedb -a -s pg_catalog -d $DBNAME
If the system catalogs become significantly bloated, you must run VACUUM FULL during a scheduled downtime period. During this period, stop all catalog activity on the system; VACUUM FULL takes ACCESS EXCLUSIVE locks against the system catalog. Running VACUUM regularly on system catalog tables can prevent the need for this more costly procedure.
These are steps for intensive system catalog maintenance.
- Stop all catalog activity on the Greenplum Database system.
- Perform a VACUUM FULL on the system catalog tables. See the following Note.
- Perform an ANALYZE on the system catalog tables to update the catalog table statistics.
Note: The system catalog table pg_attribute is usually the largest catalog table. If the pg_attribute table is significantly bloated, a VACUUM FULL operation on the table might require a significant amount of time and might need to be performed separately. The presence of both of these conditions indicates a significantly bloated pg_attribute table that might require a long VACUUM FULL time:
- The pg_attribute table contains a large number of records.
- The diagnostic message for pg_attribute is "significant amount of bloat" in the gp_toolkit.gp_bloat_diag view.
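A quick way to check both conditions uses the gp_bloat_diag columns shown earlier; for example:
SELECT count(*) FROM pg_catalog.pg_attribute;   -- how many records the catalog table holds
SELECT bdirelpages, bdiexppages, bdidiag
  FROM gp_toolkit.gp_bloat_diag
 WHERE bdinspname = 'pg_catalog' AND bdirelname = 'pg_attribute';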
Parent topic: System Monitoring and Maintenance