Meta Node
Meta-Nodes are nodes installed with Pigsty, with admin capability and a complete infra set.
The current node is marked as a meta node during `./configure` and populated in the `meta` group of the inventory.
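As an illustrative sketch (not a complete Pigsty config file), the `meta` group in the inventory might look like the following; the IP `10.10.10.10` is the sandbox convention and an assumption here:

```yaml
all:
  children:
    meta:                  # the meta group of the inventory
      vars:
        meta_node: true    # flag marking members as meta nodes
      hosts:
        10.10.10.10: {}    # the node marked as meta by ./configure
```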
Pigsty requires at least one meta node per environment. It will be used as a command center for the entire environment. It’s the meta node’s responsibility to keep states, manage configs, launch plays, run tasks, and collect metrics & logs. The infra set is deployed on meta nodes by default: Nginx, Grafana, Prometheus, Alertmanager, NTP, DNS Nameserver, and DCS.
Reuse Meta Node
The meta node can also be reused as a common node, and a PostgreSQL cluster named `pg-meta` is created on it by default, supporting additional features: CMDB, routine task reports, extended apps, log analysis & data analysis, etc.
Taking Pigsty Sandbox as an example, the distribution of components on the nodes is shown below.
The sandbox consists of 4 nodes and is deployed with one infra set and 2 database clusters. `meta` is the meta node, deployed with the infra set and reused as a regular node hosting the meta DB cluster `pg-meta`. `node-1`, `node-2`, and `node-3` are normal nodes deployed with the cluster `pg-test`.
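The sandbox layout above can be summarized in a sketch; the IPs follow the sandbox convention and are assumptions:

```yaml
# Sandbox node layout (illustrative, not a full pigsty.yml)
meta:   {ip: 10.10.10.10, roles: [infra, pg-meta]}   # meta node, reused as a regular node
node-1: {ip: 10.10.10.11, roles: [pg-test]}          # normal node
node-2: {ip: 10.10.10.12, roles: [pg-test]}
node-3: {ip: 10.10.10.13, roles: [pg-test]}
```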
Meta Node Service
The services running on the meta node are shown below.
Component | Port | Description | Default Domain |
---|---|---|---|
Nginx | 80 | Web Service Portal | pigsty |
Yum | 80 | Local Yum Repo | yum.pigsty |
Grafana | 3000 | Monitoring Dashboards/Visualization Platform | g.pigsty |
AlertManager | 9093 | Alert aggregation & notification service | a.pigsty |
Prometheus | 9090 | Monitoring Time-Series Database | p.pigsty |
Loki | 3100 | Logging Database | l.pigsty |
Consul (Server) | 8500 | Distributed Configuration Management and Service Discovery | c.pigsty |
Docker | 2375 | Container Platform | - |
PostgreSQL | 5432 | Pigsty CMDB | - |
Ansible | - | Controller | - |
Consul DNS | 8600 | DNS Service Discovery powered by Consul | - |
Dnsmasq | 53 | DNS Name Server(Optional) | - |
NTP | 123 | NTP Time Server(Optional) | - |
Pgbouncer | 6432 | Pgbouncer Connection Pooling Service | - |
Patroni | 8008 | Patroni HA Component | - |
Haproxy Primary | 5433 | Primary Pooling: Read/Write Service | - |
Haproxy Replica | 5434 | Replica Pooling: Read-Only Service | - |
Haproxy Default | 5436 | Primary Direct Connect Service | - |
Haproxy Offline | 5438 | Offline Direct Connect: Offline Read Service | - |
Haproxy Admin | 9101 | HAProxy admin & metrics | - |
PG Exporter | 9630 | PG Monitoring Metrics Exporter | - |
PGBouncer Exporter | 9631 | PGBouncer Monitoring Metrics Exporter | - |
Node Exporter | 9100 | Node monitoring metrics | - |
Promtail | 9080 | Logger agent | - |
vip-manager | - | Bind VIP to the primary | - |
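As a quick way to see which of these services are reachable, here is a hypothetical reachability probe using only the Python standard library. The ports come from the table above; the target host (e.g. the sandbox meta node `10.10.10.10`) is an assumption for your environment:

```python
# Hypothetical TCP reachability probe for meta node web services.
import socket

META_SERVICES = {        # service -> port, per the table above
    "nginx": 80,
    "grafana": 3000,
    "prometheus": 9090,
    "alertmanager": 9093,
    "loki": 3100,
    "consul": 8500,
}

def check(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (run on or against a meta node):
#   for name, port in META_SERVICES.items():
#       print(name, "up" if check("10.10.10.10", port) else "down")
```

This only checks TCP connectivity, not application health; for Grafana or Prometheus a proper HTTP health endpoint check would be more thorough.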
Meta Node & DCS
By default, DCS Servers (Consul or Etcd) will be deployed on the meta nodes, or you can use an External DCS Cluster. All other infra components are deployed on each meta node as identical peer copies. At least 1 meta node is required; 3 are recommended, and no more than 5 are advised.
DCS Servers are used for leader election in HA scenarios. Shutting down the DCS servers will demote ALL database clusters, which then reject any writes by default! So make sure these DCS servers have enough availability, at least stronger than PostgreSQL itself. For production-grade deployments, it is recommended to add more meta nodes or use an external, independently maintained HA DCS cluster.
Multiple Meta Nodes
Usually, one meta node is sufficient for basic usage, two meta nodes can serve as a standby backup, and three meta nodes can form a minimal production-grade DCS cluster themselves!
Pigsty will deploy DCS Servers on all meta nodes by default for the sake of being battery-included, but having more than 3 meta nodes adds little value. If you are seeking HA DCS services, using an external DCS cluster with 3~5 nodes would be more appropriate.
Meta nodes are configured under `all.children.meta.hosts` in the inventory and marked with the `meta_node: true` flag. The node that runs `configure` is marked as meta automatically; multiple meta nodes have to be configured manually, check pigsty-dcs3.yml for an example.
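A hedged sketch of a three-meta-node inventory in the spirit of pigsty-dcs3.yml (the IPs are placeholders, not the actual example file's contents):

```yaml
all:
  children:
    meta:
      vars:
        meta_node: true      # mark all members as meta nodes
      hosts:                 # three meta nodes -> a 3-server DCS quorum
        10.10.10.10: {}
        10.10.10.11: {}
        10.10.10.12: {}
```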
If you are not using any external DCS as an arbiter, at least 3 nodes are required to form a meaningful HA cluster that tolerates one node failure.
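The "at least 3 nodes" rule follows from majority-quorum arithmetic: a consensus cluster (Consul/Etcd) of n servers stays available as long as a majority survives, so it tolerates floor((n-1)/2) failures. A minimal sketch:

```python
# Majority-quorum arithmetic behind the 1/3/5 meta node sizing advice.
def tolerated_failures(n: int) -> int:
    """Number of DCS server failures an n-server cluster can survive."""
    return (n - 1) // 2

for n in (1, 2, 3, 5):
    print(f"{n} server(s) -> tolerates {tolerated_failures(n)} failure(s)")
```

Note that 2 servers tolerate no more failures than 1, which is why even-sized DCS clusters are rarely worth the extra node.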
Last modified 2022-06-04: fii en docs batch 2 (61bf601)