PostgreSQL Service

Service is an abstraction for database functionality and access.

The concepts of Service and Access are proposed for HA PostgreSQL clusters.


Personal User

After completing the singleton deployment, port 5432 of this node provides PostgreSQL services, and port 80 provides web UI services.

On the current meta node, running psql with no arguments as the admin user connects directly to the pre-defined local meta database.

When accessing PostgreSQL from another host with a client tool, you can use a connection URL:

  psql postgres://dbuser_dba:DBUser.DBA@10.10.10.10/meta    # dbsu direct connection
  psql postgres://dbuser_meta:DBUser.Meta@10.10.10.10/meta  # business user direct connection

You can access this database using the admin user specified by pg_admin_username and pg_admin_password, or the business user (dbuser_meta) pre-defined in the meta database.

When using an HA database cluster deployed with Pigsty, it is not recommended to access the database service via direct IP connection.


Service

A service is the form in which a database cluster provides its functionality to the outside world.

In a production environment, a replication-based primary-replica database cluster is used. There is one and only one primary in the cluster that can accept writes, while the other replicas continuously fetch logs from the primary to keep up with it. Replicas can also serve read-only requests.

In addition, for production environments with short-lived, high-frequency connections, requests are pooled through Pgbouncer to reduce connection creation overhead. However, in ETL and schema-change scenarios, the connection pool must be bypassed to access the database directly.
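For illustration, the difference is simply which port on the instance you connect to. A minimal sketch, reusing the demo node and business user from above, with the documented default ports (Pgbouncer 6432, Postgres 5432):

  psql postgres://dbuser_meta:DBUser.Meta@10.10.10.10:6432/meta   # pooled access via Pgbouncer, for short frequent connections
  psql postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta   # direct access to Postgres, for ETL / DDL changes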

In addition, HA clusters have a failover feature that changes the cluster’s primary. HA clustering solutions therefore require that write traffic automatically adapt to changes in the cluster’s primary.

These different access requirements (read/write separation, pooling versus direct connection, automatic failover adaptation) are eventually abstracted into the concept of Service.

In general, a database cluster must provide at least one service.

  • read-write service (primary): can write to the database

For a production database cluster, at least two services should be provided.

  • read-write service (primary): can write to the database

  • read-only service (replica): access to the replica

In addition, there may be other services:

  • offline: For ETL and personal queries.
  • standby: Read-only service with synchronous commit and no replication delay.
• delayed: Allows access to old data as of a fixed time interval in the past.
  • default: Service that allows admin users to manage the database directly, bypassing the connection pool.

Default Services

Pigsty provides four services by default: primary, replica, default, and offline.

New services can be defined for global or individual clusters via config files.
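As a sketch of where these definitions live (the inventory layout below is assumed rather than quoted from a Pigsty config file): pg_services holds the global defaults, while pg_services_extra adds services to a single cluster:

  all:
    vars:
      pg_services:              # global default services (primary/replica/default/offline)
        - name: primary
          src_port: 5433
          dst_port: pgbouncer
          check_url: /primary
          selector: "[]"
        # ... remaining default services as detailed below
    children:
      pg-meta:
        vars:
          pg_services_extra: []   # additional services for this cluster only (e.g. standby)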

  service   port   purpose                description
  primary   5433   production read/write  connect to primary via connection pool
  replica   5434   production read-only   connect to replica via connection pool
  default   5436   management             direct connection to primary
  offline   5438   ETL/personal user      direct connection to offline instance

Take the meta DB pg-meta as an example:

  psql postgres://dbuser_meta:DBUser.Meta@pg-meta:5433/meta   # production read/write
  psql postgres://dbuser_meta:DBUser.Meta@pg-meta:5434/meta   # production read-only
  psql postgres://dbuser_dba:DBUser.DBA@pg-meta:5436/meta     # direct connection to primary
  psql postgres://dbuser_stats:DBUser.Stats@pg-meta:5438/meta # direct connection to offline

These four services are described in detail below.

Primary Service

The Primary service is used for online production read and write access. It maps the cluster’s port 5433 to the primary’s connection pool port (6432 by default).

The Primary service selects all instances in the cluster as members, but only the primary can actually take traffic, because one and only one instance returns success on the /primary health check.

  # primary service will route {ip|name}:5433 to primary pgbouncer (5433->6432 rw)
  - name: primary           # service name {{ pg_cluster }}-primary
    src_ip: "*"
    src_port: 5433
    dst_port: pgbouncer     # 5433 routes to pgbouncer
    check_url: /primary     # primary health check, success when instance is primary
    selector: "[]"          # select all instances as primary service candidates

The Primary health check is served by the HA component Patroni, which returns 200 only on the primary; this also ensures that the cluster cannot have a second primary.
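This check can be probed by hand. A minimal sketch, assuming Patroni’s REST API listens on its default port 8008 on the demo node used earlier:

  curl -s -o /dev/null -w '%{http_code}\n' http://10.10.10.10:8008/primary   # 200 on the primary, 503 elsewhere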

When the cluster fails over, the health check turns true for the new primary and false for the old one, so traffic migrates to the new primary. The business side will observe roughly 30 seconds of Primary service unavailability.

Replica Service

The Replica service is used for online production read-only access. It maps the cluster’s port 5434 to a replica’s connection pool port (6432 by default).

The Replica service selects all instances in the cluster as members, but only those passing the /read-only health check can take traffic; that check returns success for any instance that can serve read-only queries.

By default, only replicas carry read-only requests. The Replica service defines selector_backup, a selector that adds the cluster’s primary to the service as a backup instance: the primary starts taking read-only traffic only when all replicas are down.

The offline role also serves as a backup: offline instances are usually dedicated to OLAP/ETL/personal queries and are not suitable for mixing with online queries, so they too take read-only traffic only when all replicas are down.

  # replica service will route {ip|name}:5434 to replica pgbouncer (5434->6432 ro)
  - name: replica           # service name {{ pg_cluster }}-replica
    src_ip: "*"
    src_port: 5434
    dst_port: pgbouncer
    check_url: /read-only   # read-only health check (including primary)
    selector: "[]"          # select all instances as replica service candidates
    selector_backup: "[? pg_role == `primary` || pg_role == `offline` ]"

Default Service

The Default service is used for online direct connections to the primary. It maps the cluster’s port 5436 to the primary’s Postgres port (5432 by default).

The Default service targets interactive read/write access, including executing admin commands, performing DDL changes, connecting to the primary to run DML, and performing CDC. It forwards traffic directly to Postgres, bypassing Pgbouncer.

The Default service is similar to the Primary service; its definition uses the same config fields.

  # default service will route {ip|name}:5436 to primary postgres (5436->5432 primary)
  - name: default           # service's actual name is {{ pg_cluster }}-default
    src_ip: "*"             # service bind ip address, * for all, vip for cluster virtual ip address
    src_port: 5436          # bind port, mandatory
    dst_port: postgres      # target port: postgres|pgbouncer|port_number, pgbouncer(6432) by default
    check_method: http      # health check method: only http is available for now
    check_port: patroni     # health check port: patroni|pg_exporter|port_number, patroni by default
    check_url: /primary     # health check url path, / as default
    check_code: 200         # health check http code, 200 as default
    selector: "[]"          # instance selector
    haproxy:                # haproxy specific fields
      maxconn: 3000         # default front-end connection limit
      balance: roundrobin   # load balance algorithm (roundrobin by default)
      default_server_options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

Offline Service

The Offline service is used for offline access and personal queries. It maps the cluster’s port 5438 to an offline instance’s Postgres port (5432 by default).

The Offline service targets interactive read-only access, including ETL, offline analytics queries, and individual user queries. The Offline service also forwards traffic directly to Postgres, bypassing Pgbouncer.

Offline instances are those whose pg_role is offline or that are tagged with pg_offline_query. Other replicas act as backup instances for the Offline service, so the service can still be provided by an ordinary replica when all offline instances are down.

  # offline service will route {ip|name}:5438 to offline postgres (5438->5432 offline)
  - name: offline           # service name {{ pg_cluster }}-offline
    src_ip: "*"
    src_port: 5438
    dst_port: postgres
    check_url: /replica     # offline MUST be a replica
    selector: "[? pg_role == `offline` || pg_offline_query ]"         # instances with pg_role == 'offline' or marked with 'pg_offline_query == true'
    selector_backup: "[? pg_role == `replica` && !pg_offline_query]"  # replicas are used as backup servers for the offline service

User-Defined Service

In addition to the default services configured by pg_services above, users can define additional services for the PostgreSQL cluster in the pg_services_extra config entry.

A cluster can define multiple services, each containing any number of cluster members, distinguished by port. The following code defines a new service named standby that uses port 5435 to provide synchronous reads. This service reads from synchronous standbys (or the primary) in the cluster, ensuring that all reads are served without replication lag.

  # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
  - name: standby           # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
    src_ip: "*"             # required, service bind ip address, `*` for all ip, `vip` for cluster `vip_address`
    src_port: 5435          # required, service exposed port (work as kubernetes service node port mode)
    dst_port: postgres      # optional, destination port, postgres|pgbouncer|<port_number>, pgbouncer(6432) by default
    check_method: http      # optional, health check method: http is the only available method for now
    check_port: patroni     # optional, health check port: patroni|pg_exporter|<port_number>, patroni(8008) by default
    check_url: /read-only?lag=0   # optional, health check url path, / by default
    check_code: 200         # optional, health check expected http code, 200 by default
    selector: "[]"          # required, JMESPath selector to filter inventory
    selector_backup: "[? pg_role == `primary` ]"  # primary used as backup server for the standby service
    haproxy:                # optional, adhoc parameters for haproxy service provider (vip_l4 is another service provider)
      maxconn: 3000         # optional, max allowed front-end connections
      balance: roundrobin   # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
      default_server_options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

Required

  • Name (service.name).

The full name of the service is the database cluster name and service.name joined by -; for example, the primary service of cluster pg-meta is named pg-meta-primary.

  • Port (service.src_port).

    In Pigsty, services are exposed as NodePort by default, so the port is mandatory. However, if you use an LB-based service access scheme, you can also differentiate services in other ways.

  • selector (service.selector).

    The selector specifies the instance members of the service as a JMESPath expression that filters on variables from all cluster instances. The default [] selector picks all cluster members. A few more examples are sketched after this list.
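To make the selector semantics concrete, here are a few illustrative JMESPath expressions; the first and third appear in the default services above, while the second is a sketch of the same pattern:

  selector: "[]"                                             # all cluster members
  selector: "[? pg_role == `replica` ]"                      # replicas only
  selector: "[? pg_role == `offline` || pg_offline_query ]"  # offline instances or replicas tagged for offline queries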

Optional

  • backup selector (service.selector_backup).

    The backup selector selects or marks the list of backup instances for the service, i.e., instances that take over the service only when all other members of the cluster fail.

  • source_ip (service.src_ip).

    Indicates the IP address the service uses externally. The default is *, meaning all IP addresses on the host. Using vip takes the value of the vip_address variable; you can also fill in a specific IP address supported by a NIC.

  • Destination port (service.dst_port).

    Indicates which port on the target instance the service’s traffic is directed to. postgres points to the port the database listens on, pgbouncer points to the port the connection pool listens on, and a fixed port number can also be given.

  • Health check method (service.check_method):

    The method the service uses to check the health status of instances. Currently, only HTTP is supported.

  • Health check port (service.check_port):

    The port the service probes to obtain an instance’s health status. patroni fetches it from Patroni (default 8008), pg_exporter from PG Exporter (default 9630), and a custom port can also be given.

  • Health check path (service.check_url):

    The URL path used by the service for HTTP checks. / is used by default, and PG Exporter and Patroni provide a variety of health check endpoints that can be used to differentiate primary and replica traffic. For example, /primary returns success only on the primary, and /replica only on replicas, while /read-only returns success for any instance that supports read-only access (including the primary). These endpoints can also be probed by hand; see the sketch after this list.

  • Health check code (service.check_code):

    The HTTP status code expected by health checks; the default is 200.

  • HAProxy-specific configuration (service.haproxy):

    Proprietary config entries for the service provisioning software (HAProxy).
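As a quick way to see what these checks return, the endpoints can be probed directly. A minimal sketch, assuming Patroni’s default port 8008 on the demo node used earlier:

  curl -s -o /dev/null -w '%{http_code}\n' http://10.10.10.10:8008/replica     # 200 only on replicas
  curl -s -o /dev/null -w '%{http_code}\n' http://10.10.10.10:8008/read-only   # 200 on any readable instance, including the primary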

Service Implementation

Pigsty currently uses an HAProxy-based service implementation by default and also provides a sample implementation based on a Layer 4 load balancer (L4 VIP). For details, please refer to the Access section.
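To make the mapping from service definition to load balancer concrete, here is a hedged sketch of what an HAProxy section for pg-meta’s primary service might look like; the server list and names are illustrative, not the output of Pigsty’s actual template:

  # illustrative only: one `listen` section per service; traffic is TCP-proxied,
  # while an HTTP health check against Patroni decides which server may take it
  listen pg-meta-primary
      bind *:5433                        # src_port
      mode tcp
      balance roundrobin
      option httpchk GET /primary        # check_url
      http-check expect status 200       # check_code
      default-server inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100
      server pg-meta-1 10.10.10.10:6432 check port 8008   # dst_port=pgbouncer(6432), check_port=patroni(8008)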
