Background

In an RDS environment it is very common for multiple tenants to share the same host. There are many ways to isolate user resources, for example virtual machines or cgroups. Taking cgroups as an example, they can cap a process's CPU usage, memory usage, and block-device IOPS or throughput. The benefit of capping resources is that committed performance levels can be delivered to multiple users on shared hardware. The approach also has downsides: for instance, when a user needs to run SQL that consumes a lot of resources, it cannot take advantage of the machine's idle capacity, because it is capped. Other downsides can make RDS unstable. This article examines one of them in detail: how resource limits lead to standby lag.

Analysis

Create an instance with 32 GB of memory (enough to avoid an OOM kill during the test) and 1000 GB of storage, and limit both read and write IOPS on its block devices to 200.

The limits can be changed from the console or directly on the host. Here is an example on the host.

Get the postmaster process ID:

    $ps -ewf|grep pg468619
    pg468619 24097 1 0 Jul29 ? 00:00:17 /path/to/postgres -D /path/to/data

Check which cgroup this process belongs to:

    $cat /proc/24097/cgroup
    3:memory:/rule3038
    2:cpu:/rule3038
    1:blkio:/rule3038

The IOPS are already limited to 200:

    $cat /path/to/rule3038/blkio.throttle.read_iops_device
    8:0 200
    8:16 200
    $cat /path/to/rds/rule3038/blkio.throttle.write_iops_device
    8:0 200
    8:16 200

If no limit is in place yet, you can either write these two files directly or use cgset, for example:

    cgset -r blkio.throttle.write_iops_device='8:0 200' rule3038
    cgset -r blkio.throttle.write_iops_device='8:16 200' rule3038
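
If the cgroup itself does not exist yet, a minimal sketch of creating it and moving the postmaster into it with the libcgroup tools might look like this (rule3038 and PID 24097 are taken from the example above; controller mount points vary by distribution):

    cgcreate -g blkio:/rule3038                                    # create the blkio cgroup
    cgset -r blkio.throttle.read_iops_device='8:0 200'  rule3038   # cap reads on /dev/sda (8:0) at 200 IOPS
    cgset -r blkio.throttle.write_iops_device='8:0 200' rule3038   # cap writes on /dev/sda (8:0) at 200 IOPS
    cgclassify -g blkio:/rule3038 24097                            # move the postmaster PID into the group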

The corresponding block devices:

    $ll /dev/sda
    brw-rw---- 1 root disk 8, 0 May 18 13:59 /dev/sda
    $ll /dev/sdb
    brw-rw---- 1 root disk 8, 16 Jul 31 07:45 /dev/sdb
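
The 8:0 and 8:16 pairs in the throttle files are the devices' major:minor numbers. If they are not obvious from ls -l, lsblk (part of util-linux) can list them directly, for example:

    $lsblk -o NAME,MAJ:MIN,SIZE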

Next, create a test user, a test function, and a test table.

Connect to the database:

    $psql -p 3038 -U pg468619 postgres

Create the test user:

    postgres=# create role digoal login encrypted password 'digoal';
    postgres=# grant all on database postgres to digoal;

Create a test table to record the LSN at which the primary inserts XLOG, as well as the LSNs the standby has received, written, flushed, and applied, so that we can compute how fast the primary generates XLOG and how many bytes the standby lags behind.

    postgres=# create table tbl_xlog_insert(id int,
    sec numeric default EXTRACT(EPOCH from clock_timestamp()),
    xlog_lsn text default pg_current_xlog_insert_location(),
    sent_lsn pg_lsn,
    write_lsn pg_lsn,
    flush_lsn pg_lsn,
    replay_lsn pg_lsn);

Create the test function. Its purpose is to generate a large amount of XLOG quickly while recording the XLOG positions in the test table.

    postgres=# \c postgres postgres
    postgres=# create or replace function f_test(int) returns void as $_$
    declare
      v_tbl name := 'tbl_'||pg_backend_pid();
      ddl text := ' (id int8';
    begin
      set synchronous_commit=off;
      execute 'DROP TABLE IF EXISTS '||v_tbl;
      -- build a test table with 201 columns.
      for i in 1..200 loop
        ddl := ddl||',c'||i||' int8 default 0';
      end loop;
      ddl := ddl||') ';
      execute 'create table '||v_tbl||ddl||' with (autovacuum_enabled=off, toast.autovacuum_enabled=off)';
      execute 'insert into '||v_tbl||' select generate_series(1,'||$1||')';
      execute 'DROP TABLE IF EXISTS '||v_tbl;
      -- record the current XLOG position and the positions reported by the standby.
      -- the standby feedback may lag a little, so sleep 10 ms first.
      perform pg_sleep(0.01);
      insert into tbl_xlog_insert(id, sent_lsn,write_lsn,flush_lsn,replay_lsn) select pg_backend_pid(), sent_location,write_location,flush_location,replay_location from pg_stat_replication limit 1;
    end;
    $_$ language plpgsql strict security definer;
    postgres=# GRANT execute on function f_test(int) to digoal;

Create the test script on the test machine:

    postgres@digoal-> cat test.sql
    select f_test(5000);

One connection is enough for the test. Using more would introduce a new problem: the heartbeat check may fail and trigger a primary/standby switchover, which is another issue caused by resource limits; how to handle it is left for a later discussion.

    postgres@digoal-> pgbench -M prepared -n -r -f ./test.sql -c 1 -j 1 -t 100 -h digoal0001.pg.rds.aliyuncs.com -p 3999 -U digoal postgres

On the primary, query how fast the primary generates XLOG and how far the standby lags:

    postgres=# select round(0.001*pg_xlog_location_diff((lead(xlog_lsn,1) over(order by sec))::pg_lsn,xlog_lsn::pg_lsn)/(case lead(sec,1) over(order by sec)-sec when 0 then 1 else lead(sec,1) over(order by sec)-sec end), 2)as gen_xlog_KBpsec,
    round(pg_xlog_location_diff(xlog_lsn::pg_lsn,sent_lsn)/1024.0, 2) AS sent_xlog_delay_KB,
    round(pg_xlog_location_diff(xlog_lsn::pg_lsn,write_lsn)/1024.0, 2) AS write_xlog_delay_KB,
    round(pg_xlog_location_diff(xlog_lsn::pg_lsn,flush_lsn)/1024.0, 2) AS flush_xlog_delay_KB,
    round(pg_xlog_location_diff(xlog_lsn::pg_lsn,replay_lsn)/1024.0, 2) AS replay_xlog_delay_KB
    from tbl_xlog_insert where id=(select id from tbl_xlog_insert order by sec desc limit 1) order by sec;
    gen_xlog_kbpsec | sent_xlog_delay_kb | write_xlog_delay_kb | flush_xlog_delay_kb | replay_xlog_delay_kb
    -----------------+--------------------+---------------------+---------------------+----------------------
    814.87 | 8616.37 | 8616.37 | 8616.37 | 8616.54
    809.63 | 10752.44 | 13952.44 | 16896.44 | 16898.04
    802.75 | 12747.91 | 14795.91 | 17227.91 | 17227.92
    821.96 | 12173.92 | 16269.92 | 19981.92 | 23022.60
    820.87 | 13167.77 | 17515.32 | 19799.77 | 23168.63
    581.98 | 15177.93 | 19553.93 | 23009.93 | 26213.85
    1320.19 | 12735.36 | 17087.36 | 20415.36 | 24112.03
    818.03 | 17281.54 | 21761.54 | 25729.54 | 28560.27
    808.66 | 17093.73 | 21445.73 | 25413.73 | 26377.87
    ...
    806.56 | 79064.04 | 83544.04 | 87128.04 | 90281.66
    804.10 | 82864.95 | 87472.95 | 87472.95 | 92209.77
    802.53 | 84598.83 | 88310.83 | 91766.83 | 95275.68
    598.90 | 82998.66 | 90038.66 | 90038.66 | 93495.39
    1304.52 | 84704.81 | 89184.81 | 92768.81 | 92768.95
    807.26 | 86847.66 | 91199.66 | 94527.66 | 97583.47
    823.86 | 89209.00 | 93561.00 | 97785.00 | 97785.32
    805.17 | 96967.23 | 101319.23 | 105287.23 | 108879.23
    829.41 | 101531.95 | 105883.95 | 109979.95 | 113468.85
    797.81 | 101999.23 | 106479.23 | 110447.23 | 110448.48
    609.47 | 102789.51 | 107269.51 | 111109.51 | 114741.74
     | 99381.25 | 103861.25 | 107573.25 | 111083.62
    (100 rows)

The primary generates XLOG at roughly 800 KB/s, and because of the IOPS limit on the XLOG disk the standby lag keeps growing. To rule out network latency as the cause, we can use systemtap to trace the IOPS of the wal sender and wal writer processes and the network throughput of the wal sender process.
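
Incidentally, a spot check of the same lag can be taken at any moment directly from pg_stat_replication on the primary. A minimal sketch, assuming PostgreSQL 9.4 column names and superuser access:

    $psql -p 3038 -U postgres postgres -c "
    select application_name,
           pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_insert_location(), sent_location))   as sent_lag,
           pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_insert_location(), replay_location)) as replay_lag
    from pg_stat_replication;"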

The kernel debuginfo packages need to be installed first. Trace a given postgres wal writer process to get its read and write IOPS:

    # vi io.stp
    global reads, writes, total_io
    probe vfs.read.return {
      reads[execname()] += 1
    }
    probe vfs.write.return {
      writes[execname()] += 1
    }
    # print every second
    probe timer.s(1) {
      foreach (name in writes)
        total_io[name] += writes[name]
      foreach (name in reads)
        total_io[name] += reads[name]
      printf ("%16s\t%10s\t%10s\n", "Process", "SUM Read IO", "SUM Written IO")
      foreach (name in total_io-)
        printf("%16s\t%10d\t%10d\n", name, reads[name], writes[name])
      delete reads
      delete writes
      print("\n")
    }
    # print total_io on exit
    probe end {
      foreach (name in total_io-)
        printf("%16s\t%10d\t%10d\n", name, reads[name], writes[name])
      delete total_io
      print("\n")
    }
    # stap -vp 5 -DMAXSKIPPED=9999 -DSTP_NO_OVERLOAD -DMAXTRYLOCK=100 ./io.stp -x $pid
Trace a given postgres wal sender process to get its network transfer rate:

    # vi net.stp
    global ifxmit, ifrecv
    global ifmerged
    // three global variables: the transmit and receive arrays hold per-packet statistics;
    // the merged array accumulates transmit and receive packet counts per (pid(), dev_name, execname(), uid()).
    probe netdev.transmit
    {
      ifxmit[pid(), dev_name, execname(), uid()] <<< length
    }
    // the netdev.transmit probe fires when a network device transmits a buffer.
    probe netdev.receive
    {
      ifrecv[pid(), dev_name, execname(), uid()] <<< length
    }
    // the netdev.receive probe fires when data is received from a network device.
    function print_activity()
    {
      printf("%5s %5s %-7s %7s %7s %7s %7s %-15s\n",
             "PID", "UID", "DEV", "XMIT_PK", "RECV_PK",
             "XMIT_KB", "RECV_KB", "COMMAND")
      foreach ([pid, dev, exec, uid] in ifrecv) {
        ifmerged[pid, dev, exec, uid] += @count(ifrecv[pid,dev,exec,uid]);
      }
      foreach ([pid, dev, exec, uid] in ifxmit) {
        ifmerged[pid, dev, exec, uid] += @count(ifxmit[pid,dev,exec,uid]);
      }
      // print in descending order
      foreach ([pid, dev, exec, uid] in ifmerged-) {
        n_xmit = @count(ifxmit[pid, dev, exec, uid])
        n_recv = @count(ifrecv[pid, dev, exec, uid])
        printf("%5d %5d %-7s %7d %7d %7d %7d %-15s\n",
               pid, uid, dev, n_xmit, n_recv,
               n_xmit ? @sum(ifxmit[pid, dev, exec, uid])/1024 : 0,
               n_recv ? @sum(ifrecv[pid, dev, exec, uid])/1024 : 0,
               exec)
      }
      // columns: pid, uid, network interface, packets transmitted, packets received,
      // KB transmitted, KB received, command.
      print("\n")
      delete ifxmit
      delete ifrecv
      delete ifmerged
    }
    // print_activity reports per (pid, dev, exec, uid), sorted in descending order of combined packet count;
    // the @sum expressions give the KB transmitted and received;
    // the three global arrays are cleared at the end, so each call reports the interval since the previous call.
    probe timer.s(1), end, error
    {
      print_activity()
    }
    // call print_activity once per second.
    # stap -vp 5 -DMAXSKIPPED=9999 -DSTP_NO_OVERLOAD -DMAXTRYLOCK=100 ./net.stp -x $pid

The production hosts do not have the kernel debuginfo packages, so the tracing above cannot be done there. There are other ways to confirm that the network is not the cause, though: sar -n DEV 1 100000 shows network traffic, and iotop would show per-process IO, but unfortunately iotop is not available in production either.
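
Without iotop, a rough per-process IO rate can still be sampled from /proc (a minimal sketch; the PID is whichever wal writer or wal sender process is being watched, and /proc/<pid>/io requires kernel IO accounting support):

    pid=24097                 # e.g. the postmaster or wal writer PID from ps
    cat /proc/$pid/io         # note read_bytes / write_bytes
    sleep 1
    cat /proc/$pid/io         # the difference between the two samples is the per-second rate
    # if sysstat's pidstat is available, 'pidstat -d -p $pid 1' reports the same thing continuously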

So next, lift the cgroup limits and observe the network bandwidth with sar -n DEV 1 100000. Remove the postgres processes of both the primary and the standby from their cgroups:

    #cat /proc/50511/cgroup
    3:memory:/rule3008
    2:cpu:/rule3008
    1:blkio:/rule3008
    #cgdelete memory:/rule3008
    #cgdelete cpu:/rule3008
    #cgdelete blkio:/rule3008
    $cat /proc/119876/cgroup
    3:memory:/rule3090
    2:cpu:/rule3090
    1:blkio:/rule3090
    #cgdelete memory:/rule3090
    #cgdelete cpu:/rule3090
    #cgdelete blkio:/rule3090

The current network bandwidth is 1 Gbit/s:

    #ethtool eth0|grep Speed
    Speed: 1000Mb/s

Running the test again, the network traffic reaches the full bandwidth, about 1 Gbit/s ÷ 8 ≈ 120 MB/s:

    01:54:24 PM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
    01:54:25 PM lo 132.29 132.29 29.42 29.42 0.00 0.00 0.00
    01:54:25 PM eth0 85685.42 6495.83 125161.86 538.14 0.00 0.00 0.00
    01:54:25 PM eth1 125.00 77.08 32.55 34.17 0.00 0.00 2.08
    01:54:25 PM bond0 85808.33 6570.83 125192.78 572.14 0.00 0.00 2.08
    01:54:25 PM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
    01:54:26 PM lo 125.00 125.00 28.50 28.50 0.00 0.00 0.00
    01:54:26 PM eth0 85412.50 6509.38 124869.50 519.73 0.00 0.00 0.00
    01:54:26 PM eth1 129.17 97.92 11.04 57.40 0.00 0.00 0.00
    01:54:26 PM bond0 85544.79 6611.46 124883.69 577.44 0.00 0.00 0.00

Of course, opening up IOPS completely carries its own risk. If the network bandwidth is lower than the rate at which the primary generates WAL, the WAL the standby still needs may be recycled on the primary before it can be shipped. If the standby then cannot obtain the required XLOG from anywhere, it is stranded. To avoid this we need to consider raising wal_keep_segments accordingly, or letting the standby recover from a WAL archive; a sketch of the relevant settings follows.
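
A minimal sketch of the knobs involved on the primary (the values are illustrative, not from the test above, and /path/to/wal_archive is a placeholder):

    # retain more WAL segments (16 MB each) for lagging standbys, and archive WAL as a fallback
    echo "wal_keep_segments = 512"                           >> /path/to/data/postgresql.conf   # ~8 GB of extra WAL
    echo "archive_mode = on"                                 >> /path/to/data/postgresql.conf
    echo "archive_command = 'cp %p /path/to/wal_archive/%f'" >> /path/to/data/postgresql.conf
    pg_ctl -D /path/to/data restart                          # archive_mode only takes effect after a restart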

The following shows what happens once the IOPS limit is removed entirely: WAL is now written at about 300 MB/s, exceeding the 120 MB/s network bandwidth, so the standby falls behind all the same:

    postgres=# select round(0.001*pg_xlog_location_diff((lead(xlog_lsn,1) over(order by sec))::pg_lsn,xlog_lsn::pg_lsn)/(case lead(sec,1) over(order by sec)-sec when 0 then 1 else lead(sec,1) over(order by sec)-sec end), 2)as gen_xlog_KBpsec,
    round(pg_xlog_location_diff(xlog_lsn::pg_lsn,sent_lsn)/1024.0, 2) AS sent_xlog_delay_KB,
    round(pg_xlog_location_diff(xlog_lsn::pg_lsn,write_lsn)/1024.0, 2) AS write_xlog_delay_KB,
    round(pg_xlog_location_diff(xlog_lsn::pg_lsn,flush_lsn)/1024.0, 2) AS flush_xlog_delay_KB,
    round(pg_xlog_location_diff(xlog_lsn::pg_lsn,replay_lsn)/1024.0, 2) AS replay_xlog_delay_KB
    from tbl_xlog_insert where id=(select id from tbl_xlog_insert order by sec desc limit 1) order by sec;
    gen_xlog_kbpsec | sent_xlog_delay_kb | write_xlog_delay_kb | flush_xlog_delay_kb | replay_xlog_delay_kb
    -----------------+--------------------+---------------------+---------------------+----------------------
    306692.66 | 2763682.05 | 2769058.05 | 2769058.05 | 2773902.84
    310323.46 | 2821964.55 | 2826060.55 | 2826060.55 | 2830312.84
    286290.82 | 2878171.34 | 2882267.34 | 2882267.34 | 2888606.55
    308019.02 | 2933665.58 | 2937761.58 | 2937761.58 | 2941141.12
    347211.31 | 2992420.60 | 2996516.60 | 2996516.60 | 3000172.73
    339915.61 | 3049958.89 | 3054054.89 | 3054054.89 | 3060592.84
    345896.39 | 3110687.52 | 3115295.52 | 3115295.52 | 3123686.84
    326032.85 | 3169349.81 | 3174213.81 | 3174341.81 | 3176994.25
    309214.90 | 3223985.07 | 3231409.07 | 3231409.07 | 3234051.52
    335529.65 | 3273305.11 | 3277529.11 | 3277529.11 | 3286191.45
    329873.46 | 3330912.32 | 3335008.32 | 3335008.32 | 3343094.45
    293908.19 | 3386875.67 | 3392251.67 | 3392251.67 | 3398439.91
    329390.57 | 3439531.98 | 3443627.98 | 3443627.98 | 3448171.41
Once the XLOG the standby still needs has been overwritten on the primary, the standby can no longer obtain that WAL. In that case the only options are to rebuild the standby or to fetch the missing XLOG from an archive (a sketch of rebuilding follows the log below):

    91812 2015-07-31 05:55:40 UTC 00000 LOG: started streaming WAL from primary at B/7A000000 on timeline 1
    91812 2015-07-31 05:55:40 UTC XX000 FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000B0000007A has already been removed
    91833 2015-07-31 05:55:45 UTC 00000 LOG: started streaming WAL from primary at B/7A000000 on timeline 1
    91833 2015-07-31 05:55:45 UTC XX000 FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000B0000007A has already been removed
Discussion of solutions

The problems:

  1. If IOPS on the XLOG disk are limited and a user's workload stays near the limit for a long time, the standby may fall behind, and heartbeat timeouts may follow.
  2. If IOPS on the XLOG disk are not limited, the primary may generate WAL faster than the NIC bandwidth (from what we have seen, switching to 10 GbE would solve this); the primary's XLOG may then be overwritten before it is shipped, and the standby can no longer obtain the XLOG it needs.

Possible solutions:

  1. Do not limit IOPS on the XLOG disk, and use 10 GbE NICs.
  2. Configure a restore_command on the standby, so that when the primary's WAL has been recycled it can be restored from the archive (see the sketch below).
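
A minimal sketch of the standby side, assuming WAL is archived to the placeholder directory /path/to/wal_archive used above:

    # recovery.conf on the standby
    echo "restore_command = 'cp /path/to/wal_archive/%f %p'" >> /path/to/data/recovery.conf
    # if streaming fails because a segment was already removed on the primary, the standby
    # falls back to restore_command, replays from the archive, and then resumes streaming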

References

Red Hat cgroup documentation; SystemTap documentation

Original article (Chinese): http://mysql.taobao.org/monthly/2015/08/02/