Meta Node

Meta nodes are nodes installed with Pigsty, carrying admin capability and a complete infra set.

The current node is marked as a meta node during ./configure and is populated into the meta group of the inventory.
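For illustration, here is a minimal sketch of what the meta group in the inventory may look like. The IP address 10.10.10.10 and the surrounding structure are assumptions based on a default single-node setup, not literal output of configure; only the group path all.children.meta and the meta_node: true flag come from this document.

```yaml
all:
  children:
    meta:                     # the meta group holds all meta nodes
      vars:
        meta_node: true       # flag marking members of this group as meta nodes
      hosts:
        10.10.10.10: {}       # hypothetical IP of the node that ran ./configure
```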

Pigsty requires at least one meta node per environment. It will be used as a command center for the entire environment. It’s the meta node’s responsibility to keep states, manage configs, launch plays, run tasks, and collect metrics & logs. The infra set is deployed on meta nodes by default: Nginx, Grafana, Prometheus, Alertmanager, NTP, DNS Nameserver, and DCS.

Reuse Meta Node

The meta node can also be reused as a common node, and a PostgreSQL cluster named pg-meta is created on it by default. It supports additional features: CMDB, routine task reports, extended apps, log analysis, and data analysis.
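As a sketch of how this reuse looks in the inventory, the same host can appear both in the meta group and in a pg-meta cluster group. The variable names pg_cluster, pg_seq, and pg_role below follow common Pigsty conventions but are shown here as assumptions, not verbatim defaults.

```yaml
pg-meta:                      # the default database cluster on the meta node
  hosts:
    10.10.10.10:              # same (hypothetical) host as in the meta group above
      pg_seq: 1               # assumed instance sequence number
      pg_role: primary        # assumed role: cluster primary
  vars:
    pg_cluster: pg-meta       # cluster name used by monitoring & services
```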

Taking Pigsty Sandbox as an example, the distribution of components on the nodes is shown below.

Figure 1: Component distribution on sandbox nodes

The sandbox consists of 4 nodes: one meta node and three regular nodes, deployed with one infra set and 2 database clusters. meta is the meta node, deployed with the infra set and reused as a regular node hosting the meta DB cluster pg-meta. node-1, node-2, and node-3 are regular nodes deployed with the cluster pg-test.
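A sketch of how the pg-test cluster in the sandbox might be declared, in the same style as the pg-meta example above. The IP addresses 10.10.10.11–13 and the variable names are assumptions for illustration.

```yaml
pg-test:                      # 3-node cluster on the regular nodes
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }   # node-1 (assumed IP)
    10.10.10.12: { pg_seq: 2, pg_role: replica }   # node-2 (assumed IP)
    10.10.10.13: { pg_seq: 3, pg_role: replica }   # node-3 (assumed IP)
  vars:
    pg_cluster: pg-test       # cluster name of the test cluster
```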

Meta Node Service

The services running on the meta node are shown below.

| Component | Port | Description | Default Domain |
|-----------|------|-------------|----------------|
| Nginx | 80 | Web Service Portal | pigsty |
| Yum | 80 | Local Yum Repo | yum.pigsty |
| Grafana | 3000 | Monitoring Dashboards / Visualization Platform | g.pigsty |
| AlertManager | 9093 | Alert aggregation & notification service | a.pigsty |
| Prometheus | 9090 | Monitoring Time-Series Database | p.pigsty |
| Loki | 3100 | Logging Database | l.pigsty |
| Consul (Server) | 8500 | Distributed Configuration Management and Service Discovery | c.pigsty |
| Docker | 2375 | Container Platform | - |
| PostgreSQL | 5432 | Pigsty CMDB | - |
| Ansible | - | Controller | - |
| Consul DNS | 8600 | DNS Service Discovery powered by Consul | - |
| Dnsmasq | 53 | DNS Name Server (Optional) | - |
| NTP | 123 | NTP Time Server (Optional) | - |
| Pgbouncer | 6432 | Pgbouncer Connection Pooling Service | - |
| Patroni | 8008 | Patroni HA Component | - |
| Haproxy Primary | 5433 | Primary Pooling: Read/Write Service | - |
| Haproxy Replica | 5434 | Replica Pooling: Read-Only Service | - |
| Haproxy Default | 5436 | Primary Direct Connect Service | - |
| Haproxy Offline | 5438 | Offline Direct Connect: Offline Read Service | - |
| Haproxy Admin | 9101 | HAProxy admin & metrics | - |
| PG Exporter | 9630 | PG Monitoring Metrics Exporter | - |
| PGBouncer Exporter | 9631 | PGBouncer Monitoring Metrics Exporter | - |
| Node Exporter | 9100 | Node Monitoring Metrics Exporter | - |
| Promtail | 9080 | Logger Agent | - |
| vip-manager | - | Bind VIP to the primary | - |

Figure 2: Services running on the meta node

Meta Node & DCS

By default, DCS servers (Consul or Etcd) are deployed on the meta nodes, or you can use an External DCS Cluster instead. Apart from the DCS, every other piece of infra is deployed on each meta node as a peer-to-peer copy. At least 1 meta node is required, 3 are recommended, and more than 5 are not recommended.

DCS servers are used for leader election in HA scenarios. Shutting down the DCS servers will demote ALL clusters, which then reject any writes by default! So make sure these DCS servers have enough availability, at least stronger than the PostgreSQL clusters themselves. For production-grade deployments, it is recommended to add more meta nodes or use an external, independently maintained, HA DCS cluster.
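As a rough sketch, DCS server membership is typically declared as a name-to-address map in the config. The parameter name dcs_servers and the entries below are assumptions for illustration, not a verbatim default.

```yaml
dcs_servers:                  # assumed parameter: DCS server members (name -> ip)
  pg-meta-1: 10.10.10.10      # a single DCS server on the meta node (sandbox-style layout)
  # for an external / HA DCS cluster, list 3~5 independent servers instead, e.g.:
  # pg-meta-1: 10.10.10.21
  # pg-meta-2: 10.10.10.22
  # pg-meta-3: 10.10.10.23
```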

Multiple Meta Nodes

Usually, one meta node is sufficient for basic usage, two meta nodes can serve as a standby backup, and 3 meta nodes can host a minimal, meaningful production-grade DCS cluster by themselves!

Pigsty sets up DCS servers on all meta nodes by default for the sake of being "battery-included", but it is meaningless to have more than 3 meta nodes. If you are seeking an HA DCS service, using an external DCS cluster with 3~5 nodes would be more appropriate.

Meta nodes are configured under all.children.meta.hosts in the inventory and are marked with the meta_node: true flag. The node that runs configure will be marked as the meta node; multiple meta nodes have to be configured manually, check pigsty-dcs3.yml for an example.

If you are not using an external DCS as an arbiter, at least 3 meta nodes are required to form a meaningful HA cluster that tolerates one node failure.
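A sketch of a 3-meta-node layout in the spirit of pigsty-dcs3.yml; the IP addresses are placeholders and the exact file contents may differ.

```yaml
all:
  children:
    meta:
      vars:
        meta_node: true       # all members of this group are meta nodes
      hosts:                  # three meta nodes -> three DCS servers, tolerating one node failure
        10.10.10.10: {}       # placeholder IPs for illustration
        10.10.10.11: {}
        10.10.10.12: {}
```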
