Performance Enhancements
Optimization for massive, highly concurrent connections
- Scenario: SaaS workloads; central databases in microservice architectures
- Workload profile: many clients and many online users, resulting in a very large number of concurrent database connections
- Value: one network hop fewer than going through a connection pooler, hence better performance; supports bind variables and all the other features that connection-pooler pooling modes cannot offer
Index Enhancements
- Mitigates B-tree index bloat under high-frequency update workloads
- Scenario: frequently updated data, e.g. gaming, trading, ride sharing, IoT
- Value: less bloat, lower storage and memory consumption, higher efficiency
- Sort interface support greatly speeds up CREATE INDEX for GiST and SP-GiST indexes
- Scenario: workloads that use geospatial, JSON, or range data types, e.g. GIS, IoT, ride sharing
- Value: GiST and SP-GiST index build speed improves by an order of magnitude
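- DEMO (sketch; the table and column names are illustrative, showing the sorted build path for a GiST index on point data):
create table tbl_points (id int, p point);
insert into tbl_points select g, point(random()*1000, random()*1000) from generate_series(1, 1000000) g;
create index idx_tbl_points_gist on tbl_points using gist (p); -- the build can presort the data instead of inserting tuple by tuple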
- SP-GiST covering indexes (INCLUDE columns) allow clustered storage along arbitrary dimensions
- Scenario: workloads that use geospatial, JSON, or range data types, e.g. GIS, IoT, ride sharing
- Value: stores the included values in the index leaf entries, reducing block scans and improving the response time of range searches over spatial, JSON, and range data
- DEMO:
create index idx_tbl_1 on tbl using spgist (gis) include (col1,col2); -- col1,col2 are stored in the leaf entries
select col1,col2,gis from tbl where gis ...; -- when the visibility map shows the heap page is all-visible, no heap fetch is needed, improving performance
- BRIN indexes support bloom filters and multi-range (multi minmax) summaries
- Scenario: time-series workloads such as IoT
- Value: improves BRIN filtering precision and reduces the number of blocks returned; bloom filters allow filtering on arbitrary combinations of columns
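- DEMO (sketch; sensor_log and its columns are hypothetical names):
create table sensor_log (ts timestamptz, dev_id int, val numeric);
create index on sensor_log using brin (dev_id int4_bloom_ops); -- bloom opclass: equality filtering on a poorly correlated column
create index on sensor_log using brin (ts timestamptz_minmax_multi_ops); -- multi minmax opclass: several value ranges per block range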
Parallel Query Enhancements
- Parallel sequential scan works in chunks
- Scenario: analytical workloads
- Value: large I/Os can exploit prefetching to greatly raise sequential scan throughput, solving the problem that small I/Os cannot saturate the block device; large range scans approach the device's I/O throughput limit
- PL/pgSQL RETURN QUERY supports parallel query
- Scenario: analytical workloads
- Value: when a PL/pgSQL function returns its result with RETURN QUERY, the query can use a parallel plan for better performance
- DEMO:
create or replace function xx.... return query select xx from xx ... -- the SELECT here can be executed with a parallel plan
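A slightly fuller sketch (the function, table, and column names are hypothetical):
create or replace function f_report() returns setof bigint as $$
begin
  return query select count(*) from big_table group by some_col; -- this query is eligible for a parallel plan
end;
$$ language plpgsql;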
- REFRESH MATERIALIZED VIEW supports parallel query
- Scenario: pre-computation, e.g. analytical workloads
- Value: refreshing a materialized view can use parallel query, greatly speeding up the refresh
- DEMO:
REFRESH MATERIALIZED VIEW [ CONCURRENTLY ] name -- the refresh query can run in parallel
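For example (sketch; mv_stats and big_table are hypothetical names):
create materialized view mv_stats as select some_col, count(*) from big_table group by some_col;
refresh materialized view mv_stats; -- the underlying aggregate may be computed with a parallel plan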
Built-in sharding interface: continued postgres_fdw enhancements
- Foreign tables support asynchronous parallel execution
- Scenario: analytics over sharded data; scenarios where data from multiple instances or shards must be analyzed in parallel
- Value: improves efficiency by letting multiple instances compute in parallel
- DEMO:
create foreign table ft1 (...) server server1;
...
create foreign table ftn (...) server serverN;
create table tbl (...) partition by ...;
alter table tbl attach partition ft1 ...;
...
alter table tbl attach partition ftn ...;
select count(*),avg(x),max(x) from tbl group by xxxx; -- the N servers are scanned and aggregated concurrently
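Note (sketch): with postgres_fdw, asynchronous execution of these foreign-partition scans is controlled by the async_capable server/table option, for example:
alter server server1 options (add async_capable 'true');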
- Sub-partitions of a remote partitioned table can be imported as foreign tables with IMPORT FOREIGN SCHEMA
- Scenario: sharding (splitting databases and tables)
- Value: simplifies one-step shard generation and allows shards to be generated per partition
- DEMO:
IMPORT FOREIGN SCHEMA remote_schema
[ { LIMIT TO | EXCEPT } ( table_name [, ...] ) ]
FROM SERVER server_name INTO local_schema -- sub-partitions can now be imported as well
- TRUNCATE on foreign tables
- Scenario: sharding
- Value: richer syntax support; TRUNCATE cleans up the remote table without per-row redo, so it is faster
- DEMO:
truncate foreign_table;
- Session-level persistent foreign server connections
- Scenario: sharding
- Value: improves OLTP performance in sharding scenarios (in previous versions every remote transaction had to re-establish a connection to the remote database, which made high-frequency small OLTP transactions impractical)
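- DEMO (sketch; server1 is a hypothetical foreign server managed by postgres_fdw):
alter server server1 options (add keep_connections 'on'); -- cache the connection for the rest of the session (the default)
select * from postgres_fdw_get_connections(); -- list the connections currently cached
select postgres_fdw_disconnect_all(); -- drop the cached connections explicitly when no longer needed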
Partitioned table performance approaches perfection
- Improved partition pruning reduces per-subpartition subplans and duplicated cached plans
- Scenario: large data volumes with partitioned tables and frequent operations on them; OLTP workloads
- Value: lower memory usage and a large performance boost for SQL that touches only a few partitions
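- DEMO (sketch; tbl and part_key are hypothetical names):
prepare q(int) as select * from tbl where part_key = $1;
set plan_cache_mode = force_generic_plan; -- force a cached generic plan so run-time pruning is visible
explain (analyze) execute q(1); -- partitions pruned at run time show up as "Subplans Removed: N"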
- When adding or removing partitions, ALTER TABLE ... DETACH PARTITION can run in CONCURRENTLY mode to avoid lock conflicts
- Scenario: core online businesses
- Value: partitions can be attached and detached without blocking, eliminating the impact of partition maintenance on online traffic
- DEMO:
ALTER TABLE [ IF EXISTS ] name
DETACH PARTITION partition_name [ CONCURRENTLY | FINALIZE ]
Optimization of search IN ( consts ) on large tables
- Scenario: SaaS, e-commerce, social networking, and similar workloads
- Workload profile: IN lists with many entries make the IN filter slow
- Value: switches from a linear search to a hash table probe, improving IN-clause performance; the gain is most visible for SQL with many IN entries that filter out a large share of the rows
- DEMO:
select x from tbl where id in (1,2,3,4,5,6,7,8,9);
TOAST supports the lz4 compression algorithm
- Scenario: general workloads
- Value: better compression performance
- DEMO:
CREATE TABLE cmdata(f1 text COMPRESSION pglz);
CREATE INDEX idx ON cmdata(f1);
INSERT INTO cmdata VALUES(repeat('1234567890', 1000));
\d+ cmdata
Table "public.cmdata"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
CREATE TABLE cmdata1(f1 TEXT COMPRESSION lz4);
INSERT INTO cmdata1 VALUES(repeat('1234567890', 1004));
\d+ cmdata1
Table "public.cmdata1"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | text | | | | extended | lz4 | |
Note: to use lz4 compression, PostgreSQL must be built with the --with-lz4 option.
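The default method and existing columns can also be switched (sketch; only newly stored values are affected):
set default_toast_compression = 'lz4'; -- default compression method for newly created TOAST-able columns
alter table cmdata alter column f1 set compression lz4; -- applies to values stored from now on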
Pipelined query mode
- Scenario: time-series, IoT, and append-only logging workloads
- Workload profile: high network latency between the application and the database (across subnets or the public internet) combined with a high write-throughput requirement
- Value: raises write throughput over poor networks close to what a local network can achieve; on high-latency connections, or for workloads with frequent INSERT/UPDATE/DELETE operations, system performance improves dramatically
Security Enhancements
New pg_read_all_data and pg_write_all_data roles
- Scenario: general
- Value: grants read or write privileges on all tables/views/sequences in one step, enabling read-only and write-only roles and improving DBA efficiency
- DEMO:
postgres=# select * from pg_roles ;
rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolbypassrls | rolconfig | oid
---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+------
postgres | t | t | t | t | t | t | -1 | ******** | | t | | 10
pg_database_owner | f | t | f | f | f | f | -1 | ******** | | f | | 6171
pg_read_all_data | f | t | f | f | f | f | -1 | ******** | | f | | 6181
pg_write_all_data | f | t | f | f | f | f | -1 | ******** | | f | | 6182
pg_monitor | f | t | f | f | f | f | -1 | ******** | | f | | 3373
pg_read_all_settings | f | t | f | f | f | f | -1 | ******** | | f | | 3374
pg_read_all_stats | f | t | f | f | f | f | -1 | ******** | | f | | 3375
pg_stat_scan_tables | f | t | f | f | f | f | -1 | ******** | | f | | 3377
pg_read_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4569
pg_write_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4570
pg_execute_server_program | f | t | f | f | f | f | -1 | ******** | | f | | 4571
pg_signal_backend | f | t | f | f | f | f | -1 | ******** | | f | | 4200
(12 rows)
-- create a global read-only user and a global write-only user
postgres=# create user global_read_only password '******';
CREATE ROLE
postgres=# create user global_write_only password '******';
CREATE ROLE
-- grant the roles
postgres=# grant pg_read_all_data to global_read_only ;
GRANT ROLE
postgres=# grant pg_write_all_data to global_write_only ;
GRANT ROLE
-- the global read-only user has read privileges only
postgres=# \c - global_read_only
Password for user global_read_only:
psql (12.4, server 14.1)
WARNING: psql major version 12, server major version 14.
Some psql features might not work.
You are now connected to database "postgres" as user "global_read_only".
postgres=> select * from employees limit 1;
employee_id | full_name | manager_id
-------------+---------------+------------
1 | Michael North |
(1 row)
postgres=> insert into employees (employee_id, full_name, manager_id) values(22, 'xgq', 2);
ERROR: permission denied for table employees
-- the global write-only user has write privileges only
postgres=> \c - global_write_only ;
Password for user global_write_only:
psql (12.4, server 14.1)
WARNING: psql major version 12, server major version 14.
Some psql features might not work.
You are now connected to database "postgres" as user "global_write_only".
postgres=> select * from employees limit 1;
ERROR: permission denied for table employees
postgres=> insert into employees (employee_id, full_name, manager_id) values(22, 'xgq', 2);
INSERT 0 1
SCRAM-SHA-256 is the default authentication method
- Scenario: general
- Value: completely avoids the potential hash leakage and protocol attacks of md5, improving the security of password authentication
- DEMO:
postgres=> select * from pg_settings where name='password_encryption';
-[ RECORD 1 ]---+------------------------------------------------
name | password_encryption
setting | md5
unit |
category | Connections and Authentication / Authentication
short_desc | Chooses the algorithm for encrypting passwords.
extra_desc |
context | user
vartype | enum
source | configuration file
min_val |
max_val |
enumvals | {md5,scram-sha-256}
boot_val | scram-sha-256
reset_val | md5
sourcefile |
sourceline |
pending_restart | f
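To switch an existing user to SCRAM (sketch; app_user is a hypothetical role, and reading pg_authid requires superuser):
set password_encryption = 'scram-sha-256';
alter role app_user password 'new_password'; -- the stored verifier is now in SCRAM-SHA-256 form
select rolname, rolpassword from pg_authid where rolname = 'app_user'; -- the password field should start with SCRAM-SHA-256$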
Note: older client libraries do not support SCRAM-SHA-256 authentication and fail with errors such as psycopg2.OperationalError: authentication method 10 not supported; upgrading the client library to the latest version resolves this.
Data Types and SQL
Multirange types, compatible with all known range operators and index support
- Scenario: IoT
- Value: stores the fluctuation ranges of sensor metrics; a single value can hold multiple ranges, with containment, overlap, left-of, right-of, adjacency, and other logical queries (index-accelerated)
- DEMO:
SELECT '{}'::int4multirange;
SELECT '{[3,7)}'::int4multirange;
SELECT '{[3,7), [8,9)}'::int4multirange;
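A few of the supported operators (sketch):
SELECT '{[3,7), [8,9)}'::int4multirange @> 4; -- containment
SELECT '{[3,7)}'::int4multirange + '{[8,9)}'::int4multirange; -- union of multiranges
SELECT range_agg(r) FROM (VALUES (int4range(1,3)), (int4range(5,8))) t(r); -- aggregate ranges into a multirange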
JSONB subscripting syntax
- Scenario: general
- Value: greatly simplifies working with the JSON type and supports atomic set operations via subscript assignment
- DEMO:
select ('{"postgres": {"release": "Alibaba Cloud RDS PG 14"}}'::jsonb)['postgres']['release'];
jsonb
---------------------------
"Alibaba Cloud RDS PG 14"
(1 row)
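Subscripts can also be used as assignment targets (sketch; table t is hypothetical):
create table t (id int primary key, j jsonb);
insert into t values (1, '{"postgres": {"release": "14"}}');
update t set j['postgres']['release'] = '"Alibaba Cloud RDS PG 14"' where id = 1;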
Stored procedures support OUT parameters
- Scenario: general
- Value: procedures can return values through OUT parameters
- DEMO:
-- table definition
postgres=> \d range_test
Table "public.range_test"
Column | Type | Collation | Nullable | Default
-----------+---------+-----------+----------+----------------------------------------
id | integer | | not null | nextval('range_test_id_seq'::regclass)
date_time | tsrange | | |
Indexes:
"range_test_pkey" PRIMARY KEY, btree (id)
-- table data
postgres=> select * from range_test;
id | date_time
----+-----------------------------------------------
1 | ["2010-01-01 17:00:00","2010-01-01 18:00:00")
2 | ["2010-01-01 15:00:00","2010-01-01 16:00:00")
(2 rows)
-- define the procedure; o_date_time is an OUT parameter
postgres=> create or replace procedure p1 ( i_id in int, o_date_time out tsrange )
as $$
declare
begin
select date_time
into o_date_time
from range_test where id=i_id;
end;
$$ language plpgsql;
CREATE PROCEDURE
-- call procedure p1 and receive the result through n
postgres=> do
$$
declare
n tsrange;
begin
call p1(1, n);
raise notice '%', n;
end;
$$;
NOTICE: ["2010-01-01 17:00:00","2010-01-01 18:00:00")
DO
Recursive (CTE) graph searches gain breadth-first, depth-first, and cycle-detection syntax
- Scenario: social networks, risk control, graph-shaped data, knowledge graphs
- Value: simplifies breadth-first and depth-first ordering and adds a SQL construct for cycle detection
- DEMO:
-- create the table
CREATE TABLE employees (
employee_id serial PRIMARY KEY,
full_name VARCHAR NOT NULL,
manager_id INT
);
-- insert data
INSERT INTO employees (
employee_id,
full_name,
manager_id
)
VALUES
(1, 'Michael North', NULL),
(2, 'Megan Berry', 1),
(3, 'Sarah Berry', 1),
(4, 'Zoe Black', 1),
(5, 'Tim James', 1),
(6, 'Bella Tucker', 2),
(7, 'Ryan Metcalfe', 2),
(8, 'Max Mills', 2),
(9, 'Benjamin Glover', 2),
(10, 'Carolyn Henderson', 3),
(11, 'Nicola Kelly', 3),
(12, 'Alexandra Climo', 3),
(13, 'Dominic King', 3),
(14, 'Leonard Gray', 4),
(15, 'Eric Rampling', 4),
(16, 'Piers Paige', 7),
(17, 'Ryan Henderson', 7),
(18, 'Frank Tucker', 8),
(19, 'Nathan Ferguson', 8),
(20, 'Kevin Rampling', 8);
-- depth-first
WITH RECURSIVE subordinates(employee_id, manager_id, full_name) AS (
SELECT
employee_id,
manager_id,
full_name
FROM
employees
WHERE
employee_id = 2
UNION
SELECT
e.employee_id,
e.manager_id,
e.full_name
FROM
employees e, subordinates s
WHERE
s.employee_id = e.manager_id
) SEARCH DEPTH FIRST BY employee_id SET ordercol
SELECT * FROM subordinates;
-- breadth-first
WITH RECURSIVE subordinates(employee_id, manager_id, full_name) AS (
SELECT
employee_id,
manager_id,
full_name
FROM
employees
WHERE
employee_id = 2
UNION
SELECT
e.employee_id,
e.manager_id,
e.full_name
FROM
employees e, subordinates s
WHERE
s.employee_id = e.manager_id
) SEARCH BREADTH FIRST BY employee_id SET ordercol
SELECT * FROM subordinates;
-- cycle detection
WITH RECURSIVE subordinates(employee_id, manager_id, full_name) AS (
SELECT
employee_id,
manager_id,
full_name
FROM
employees
WHERE
employee_id = 2
UNION
SELECT
e.employee_id,
e.manager_id,
e.full_name
FROM
employees e, subordinates s
WHERE
s.employee_id = e.manager_id
) CYCLE employee_id SET is_cycle USING path
SELECT * FROM subordinates;
New date_bin function
- Scenario: time-series and analytical workloads
- Value: aggregates by time bucket and simplifies the bucketing logic; supports an arbitrary origin and buckets of any interval, returning for an input timestamp the timestamp of the bucket it falls into (see the aggregation sketch after the demo)
- DEMO:
Syntax:
date_bin(stride, source, origin)
SELECT date_bin('15 minutes', TIMESTAMP '2020-02-11 15:44:17', TIMESTAMP '2001-01-01');
Result: 2020-02-11 15:30:00
SELECT date_bin('15 minutes', TIMESTAMP '2020-02-11 15:44:17', TIMESTAMP '2001-01-01 00:02:30');
Result: 2020-02-11 15:32:30
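A typical aggregation (sketch; metrics, ts, and val are hypothetical names):
SELECT date_bin('15 minutes', ts, TIMESTAMP '2001-01-01') AS bucket, avg(val)
FROM metrics
GROUP BY 1
ORDER BY 1;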
TID range scan support
- Scenario: general; high concurrency; high-throughput data processing
- Value: adds a new scan method that searches heap tables by physical block address (ctid) and lets users specify the range of blocks to scan, so applications can implement their own parallel or randomized scan logic; combined with exporting a repeatable-read snapshot, this enables consistent parallel export of a single table and full-table parallel updates without lock conflicts
- DEMO:
select e.*, e.ctid from employees e;
employee_id | full_name | manager_id | ctid
-------------+-------------------+------------+--------
1 | Michael North | | (0,1)
2 | Megan Berry | 1 | (0,2)
3 | Sarah Berry | 1 | (0,3)
4 | Zoe Black | 1 | (0,4)
5 | Tim James | 1 | (0,5)
6 | Bella Tucker | 2 | (0,6)
7 | Ryan Metcalfe | 2 | (0,7)
8 | Max Mills | 2 | (0,8)
9 | Benjamin Glover | 2 | (0,9)
10 | Carolyn Henderson | 3 | (0,10)
11 | Nicola Kelly | 3 | (0,11)
12 | Alexandra Climo | 3 | (0,12)
13 | Dominic King | 3 | (0,13)
14 | Leonard Gray | 4 | (0,14)
15 | Eric Rampling | 4 | (0,15)
16 | Piers Paige | 7 | (0,16)
17 | Ryan Henderson | 7 | (0,17)
18 | Frank Tucker | 8 | (0,18)
19 | Nathan Ferguson | 8 | (0,19)
20 | Kevin Rampling | 8 | (0,20)
(20 rows)
select * from employees where ctid > '(0,10)' and ctid < '(0,20)';
employee_id | full_name | manager_id
-------------+-----------------+------------
11 | Nicola Kelly | 3
12 | Alexandra Climo | 3
13 | Dominic King | 3
14 | Leonard Gray | 4
15 | Eric Rampling | 4
16 | Piers Paige | 7
17 | Ryan Henderson | 7
18 | Frank Tucker | 8
19 | Nathan Ferguson | 8
(9 rows)
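The planner exposes this as a dedicated node type (sketch):
explain select * from employees where ctid > '(0,10)' and ctid < '(0,20)';
-- the plan should show a "Tid Range Scan on employees" node instead of a full sequential scan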
Database Administration
Garbage collection (vacuum) enhancements
- Creating indexes in CONCURRENTLY mode no longer causes bloat
- Scenario: OLTP workloads
- Value: lower probability of bloat
- DEMO:
-- no matter how long the following operations run, they do not prevent vacuum from reclaiming garbage produced in the meantime
create index CONCURRENTLY abc ...
reindex CONCURRENTLY xxx ...
- Faster vacuum freeze, lowering the risk of xid wraparound
- Scenario: OLTP workloads
- Value: adds a full-speed vacuum mode: once triggered, vacuum ignores index cleanup and the sleep parameters so the freeze finishes as fast as possible
- DEMO:
select * from pg_settings where name ~ 'failsafe';
-[ RECORD 1 ]---+------------------------------------------------------------------------------------
name | vacuum_failsafe_age
setting | 1600000000
unit |
category | Client Connection Defaults / Statement Behavior
short_desc | Age at which VACUUM should trigger failsafe to avoid a wraparound outage.
extra_desc |
context | user
vartype | integer
source | default
min_val | 0
max_val | 2100000000
enumvals |
boot_val | 1600000000
reset_val | 1600000000
sourcefile |
sourceline |
pending_restart | f
-[ RECORD 2 ]---+------------------------------------------------------------------------------------
name | vacuum_multixact_failsafe_age
setting | 1600000000
unit |
category | Client Connection Defaults / Statement Behavior
short_desc | Multixact age at which VACUUM should trigger failsafe to avoid a wraparound outage.
extra_desc |
context | user
vartype | integer
source | default
min_val | 0
max_val | 2100000000
enumvals |
boot_val | 1600000000
reset_val | 1600000000
sourcefile |
sourceline |
pending_restart | f
- New pg_stat_progress_copy view to monitor COPY progress: how many rows have been imported and how many were excluded by a WHERE filter
- Scenario: DBA productivity
- Value: visibility into COPY import progress and speed
- DEMO:
postgres=> \d pg_stat_progress_copy
View "pg_catalog.pg_stat_progress_copy"
Column | Type | Collation | Nullable | Default
------------------+---------+-----------+----------+---------
pid | integer | | |
datid | oid | | |
datname | name | | |
relid | oid | | |
command | text | | |
type | text | | |
bytes_processed | bigint | | |
bytes_total | bigint | | |
tuples_processed | bigint | | |
tuples_excluded | bigint | | |
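A monitoring query run from another session (sketch):
select relid::regclass, command, type, bytes_processed, tuples_processed, tuples_excluded
from pg_stat_progress_copy;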
- New WAL statistics view - pg_stat_wal
- Scenario: DBA productivity
- Value: statistics about the database's WAL activity
- DEMO:
postgres=> \d pg_stat_wal
View "pg_catalog.pg_stat_wal"
Column | Type | Collation | Nullable | Default
------------------+--------------------------+-----------+----------+---------
wal_records | bigint | | |
wal_fpi | bigint | | |
wal_bytes | numeric | | |
wal_buffers_full | bigint | | |
wal_write | bigint | | |
wal_sync | bigint | | |
wal_write_time | double precision | | |
wal_sync_time | double precision | | |
stats_reset | timestamp with time zone | | |
- New replication slot statistics view - pg_stat_replication_slots
- Scenario: DBA productivity
- Value: per-slot statistics
- DEMO:
postgres=> \d pg_stat_replication_slots
View "pg_catalog.pg_stat_replication_slots"
Column | Type | Collation | Nullable | Default
--------------+--------------------------+-----------+----------+---------
slot_name | text | | |
spill_txns | bigint | | |
spill_count | bigint | | |
spill_bytes | bigint | | |
stream_txns | bigint | | |
stream_count | bigint | | |
stream_bytes | bigint | | |
total_txns | bigint | | |
total_bytes | bigint | | |
stats_reset | timestamp with time zone | | |
- pg_locks gains a waitstart column
- Scenario: DBA productivity
- Value: records when a lock wait started, providing more detail such as how long a session has been waiting
- DEMO:
postgres=> \d pg_locks
View "pg_catalog.pg_locks"
Column | Type | Collation | Nullable | Default
--------------------+--------------------------+-----------+----------+---------
locktype | text | | |
database | oid | | |
relation | oid | | |
page | integer | | |
tuple | smallint | | |
virtualxid | text | | |
transactionid | xid | | |
classid | oid | | |
objid | oid | | |
objsubid | smallint | | |
virtualtransaction | text | | |
pid | integer | | |
mode | text | | |
granted | boolean | | |
fastpath | boolean | | |
waitstart | timestamp with time zone | | |
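For example, to see how long ungranted lock requests have been waiting (sketch):
select pid, locktype, mode, granted, now() - waitstart as wait_duration
from pg_locks
where not granted;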
- pg_stat_database gains several new statistics columns
- Scenario: DBA productivity
- Value: adds active_time, idle_in_transaction_time, sessions, sessions_abandoned, sessions_fatal, and sessions_killed, providing more database-level insight into active time, idle time, and session counts to judge whether the database is reaching a bottleneck
- DEMO:
postgres=> \d pg_stat_database
View "pg_catalog.pg_stat_database"
Column | Type | Collation | Nullable | Default
--------------------------+--------------------------+-----------+----------+---------
datid | oid | | |
datname | name | | |
numbackends | integer | | |
xact_commit | bigint | | |
xact_rollback | bigint | | |
blks_read | bigint | | |
blks_hit | bigint | | |
tup_returned | bigint | | |
tup_fetched | bigint | | |
tup_inserted | bigint | | |
tup_updated | bigint | | |
tup_deleted | bigint | | |
conflicts | bigint | | |
temp_files | bigint | | |
temp_bytes | bigint | | |
deadlocks | bigint | | |
checksum_failures | bigint | | |
checksum_last_failure | timestamp with time zone | | |
blk_read_time | double precision | | |
blk_write_time | double precision | | |
session_time | double precision | | |
active_time | double precision | | |
idle_in_transaction_time | double precision | | |
sessions | bigint | | |
sessions_abandoned | bigint | | |
sessions_fatal | bigint | | |
sessions_killed | bigint | | |
stats_reset | timestamp with time zone | | |
- pg_prepared_statements adds hard-parse and soft-parse counters (generic_plans and custom_plans)
- Scenario: DBA productivity
- Value: shows how often a statement is soft- or hard-parsed, helping analyze parse-layer overhead and how to optimize it
- DEMO:
postgres=> \d pg_prepared_statements
View "pg_catalog.pg_prepared_statements"
Column | Type | Collation | Nullable | Default
-----------------+--------------------------+-----------+----------+---------
name | text | | |
statement | text | | |
prepare_time | timestamp with time zone | | |
parameter_types | regtype[] | | |
from_sql | boolean | | |
generic_plans | bigint | | |
custom_plans | bigint | | |
New GUC Parameters
- New log_recovery_conflict_waits GUC parameter
- Scenario: DBA productivity
- Value: logs standby recovery conflicts (between standby queries and the startup process) that wait longer than deadlock_timeout, showing how long read-only standby queries and WAL replay wait on each other
- DEMO:
postgres=> select * from pg_settings where name='log_recovery_conflict_waits';
-[ RECORD 1 ]---+--------------------------------------
name | log_recovery_conflict_waits
setting | off
unit |
category | Reporting and Logging / What to Log
short_desc | Logs standby recovery conflict waits.
extra_desc |
context | sighup
vartype | bool
source | default
min_val |
max_val |
enumvals |
boot_val | off
reset_val | off
sourcefile |
sourceline |
pending_restart | f
- New track_wal_io_timing GUC parameter
- Scenario: DBA productivity
- Value: collects timing of WAL buffer writes and fsync waits, helping understand I/O resource usage and analyze WAL bottlenecks
- DEMO:
postgres=> select * from pg_settings where name='track_wal_io_timing';
-[ RECORD 1 ]---+--------------------------------------------------
name | track_wal_io_timing
setting | off
unit |
category | Statistics / Query and Index Statistics Collector
short_desc | Collects timing statistics for WAL I/O activity.
extra_desc |
context | superuser
vartype | bool
source | default
min_val |
max_val |
enumvals |
boot_val | off
reset_val | off
sourcefile |
sourceline |
pending_restart | f
- New idle_session_timeout GUC parameter
- Scenario: DBA productivity
- Value: disconnects sessions that have been idle for too long, reducing the resources held by idle sessions
- DEMO:
postgres=> select * from pg_settings where name='idle_session_timeout';
-[ RECORD 1 ]---+-------------------------------------------------------------------------------
name | idle_session_timeout
setting | 0
unit | ms
category | Client Connection Defaults / Statement Behavior
short_desc | Sets the maximum allowed idle time between queries, when not in a transaction.
extra_desc | A value of 0 turns off the timeout.
context | user
vartype | integer
source | default
min_val | 0
max_val | 2147483647
enumvals |
boot_val | 0
reset_val | 0
sourcefile |
sourceline |
pending_restart | f
- New client_connection_check_interval GUC parameter
- Scenario: analyst, developer, and DBA productivity
- Value: protocol-level heartbeat: if the client has gone away, a long-running SQL it started can be aborted promptly (detect POLLHUP/POLLRDHUP while running queries) instead of running to completion; the query exits as soon as the client disconnect is detected
- DEMO:
postgres=> select * from pg_settings where name='client_connection_check_interval';
-[ RECORD 1 ]---+-------------------------------------------------------------------------------
name | client_connection_check_interval
setting | 0
unit | ms
category | Connections and Authentication / Connection Settings
short_desc | Sets the time interval between checks for disconnection while running queries.
extra_desc |
context | user
vartype | integer
source | configuration file
min_val | 0
max_val | 2147483647
enumvals |
boot_val | 0
reset_val | 0
sourcefile |
sourceline |
pending_restart | f
SQL Command Enhancements
- REINDEX command gains a TABLESPACE option
- Scenario: DBA productivity
- Value: rebuild indexes into a specified tablespace to make better use of block devices (see the example after the syntax)
- DEMO:
REINDEX [ ( option [, ...] ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } [ CONCURRENTLY ] name
where option can be one of:
CONCURRENTLY [ boolean ]
TABLESPACE new_tablespace
VERBOSE [ boolean ]
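For example (sketch; tbs_fast and tbl are hypothetical names):
REINDEX (TABLESPACE tbs_fast, VERBOSE) TABLE tbl; -- rebuild all of tbl's indexes into tablespace tbs_fast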
- REINDEX command supports partitioned tables
- Scenario: DBA productivity
- Value: automatically rebuilds the indexes of every child partition, making partitioned-table index rebuilds far more convenient
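- DEMO (sketch; measurement is a hypothetical partitioned table):
REINDEX TABLE measurement; -- recurses into all leaf partitions
REINDEX INDEX measurement_pkey; -- a partitioned index: every partition's index is rebuilt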
Streaming Replication, Backup and Recovery
Logical replication enhancements
- Optimized logical replication of long transactions
- Scenario: multi-datacenter deployments, unitized deployments, cross-database data subscription
- Value: adds a streaming interface so logical decoding can decode and send changes in a streaming fashion without waiting for the transaction to finish, greatly reducing replication lag for large and long transactions (see the subscription sketch below)
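- DEMO (sketch; sub1, pub1, and the connection string are hypothetical):
CREATE SUBSCRIPTION sub1 CONNECTION 'host=src_host dbname=srcdb' PUBLICATION pub1 WITH (streaming = on);
ALTER SUBSCRIPTION sub1 SET (streaming = on); -- or enable streaming on an existing subscription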
- The table-data sync phase of logical replication supports multiple workers
- Scenario: cross-database logical subscription, data migration
- Value: full-table data can be copied while logical WAL changes are being received, shortening the time for a large table's initial sync to reach final consistency
- ALTER SUBSCRIPTION syntax enhancements
- Scenario: logical subscription
- Value: supports ADD/DROP PUBLICATION, making subscription management on the subscriber side more convenient
- DEMO:
ALTER SUBSCRIPTION name CONNECTION 'conninfo'
ALTER SUBSCRIPTION name SET PUBLICATION publication_name [, ...] [ WITH ( publication_option [= value] [, ... ] ) ]
ALTER SUBSCRIPTION name ADD PUBLICATION publication_name [, ...] [ WITH ( publication_option [= value] [, ... ] ) ]
ALTER SUBSCRIPTION name DROP PUBLICATION publication_name [, ...] [ WITH ( publication_option [= value] [, ... ] ) ]
ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option [= value] [, ... ] ) ]
ALTER SUBSCRIPTION name ENABLE
ALTER SUBSCRIPTION name DISABLE
ALTER SUBSCRIPTION name SET ( subscription_parameter [= value] [, ... ] )
ALTER SUBSCRIPTION name OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER SUBSCRIPTION name RENAME TO new_name
Hot standby can be used as the source for pg_rewind
- Scenario: general
- Value: a standby can be used to repair a broken instance
New remove_temp_files_after_crash GUC parameter
- Scenario: general
- Value: automatically removes temporary files when the database restarts after a crash, avoiding the storage blow-up that repeated crash recovery could otherwise cause
- DEMO:
postgres=> select * from pg_settings where name='remove_temp_files_after_crash';
-[ RECORD 1 ]---+--------------------------------------------
name | remove_temp_files_after_crash
setting | on
unit |
category | Developer Options
short_desc | Remove temporary files after backend crash.
extra_desc |
context | sighup
vartype | bool
source | default
min_val |
max_val |
enumvals |
boot_val | on
reset_val | on
sourcefile |
sourceline |
pending_restart | f
Reduced standby WAL receiver latency
- Scenario: streaming-replication read replicas, streaming-replication based high availability
- Value: the WAL receiver no longer waits for the startup process to finish replaying, greatly reducing the standby's WAL receive latency after a restart
Reference: https://github.com/digoal/blog/blob/master/202105/20210513_02.md