NODES Config

Describe nodes with node config

Pigsty provides host provisioning and monitoring functions. Running the nodes.yml playbook configures a node to the desired state and incorporates it into the Pigsty monitoring system.

| ID  | Name                    | Section       | Type     | Level | Comment                                  |
|-----|-------------------------|---------------|----------|-------|------------------------------------------|
| 300 | meta_node               | NODE_IDENTITY | bool     | C     | mark this node as meta                   |
| 301 | nodename                | NODE_IDENTITY | string   | I     | node instance identity                   |
| 302 | node_cluster            | NODE_IDENTITY | string   | C     | node cluster identity                    |
| 303 | nodename_overwrite      | NODE_IDENTITY | bool     | C     | overwrite hostname with nodename         |
| 304 | nodename_exchange       | NODE_IDENTITY | bool     | C     | exchange static hostname                 |
| 310 | node_etc_hosts_default  | NODE_DNS      | string[] | C     | static DNS records                       |
| 311 | node_etc_hosts          | NODE_DNS      | string[] | C/I   | extra static DNS records                 |
| 312 | node_dns_method         | NODE_DNS      | enum     | C     | how to setup dns service?                |
| 313 | node_dns_servers        | NODE_DNS      | string[] | C     | dynamic DNS servers                      |
| 314 | node_dns_options        | NODE_DNS      | string[] | C     | /etc/resolv.conf options                 |
| 320 | node_repo_method        | NODE_REPO     | enum     | C     | how to use yum repo (local)              |
| 321 | node_repo_remove        | NODE_REPO     | bool     | C     | remove existing repo file?               |
| 322 | node_repo_local_urls    | NODE_REPO     | url[]    | C     | local yum repo url list                  |
| 330 | node_packages_default   | NODE_PACKAGES | string[] | C     | pkgs to be installed on all nodes        |
| 331 | node_packages           | NODE_PACKAGES | string[] | C     | extra pkgs to be installed               |
| 332 | node_packages_meta      | NODE_PACKAGES | string[] | G     | meta node only packages                  |
| 333 | node_packages_meta_pip  | NODE_PACKAGES | string   | G     | meta node pip3 packages                  |
| 340 | node_disable_firewall   | NODE_TUNE     | bool     | C     | disable firewall?                        |
| 341 | node_disable_selinux    | NODE_TUNE     | bool     | C     | disable selinux?                         |
| 342 | node_disable_numa       | NODE_TUNE     | bool     | C     | disable numa?                            |
| 343 | node_disable_swap       | NODE_TUNE     | bool     | C     | disable swap?                            |
| 344 | node_static_network     | NODE_TUNE     | bool     | C     | use static DNS config?                   |
| 345 | node_disk_prefetch      | NODE_TUNE     | bool     | C     | enable disk prefetch?                    |
| 346 | node_kernel_modules     | NODE_TUNE     | string[] | C     | kernel modules to be installed           |
| 347 | node_tune               | NODE_TUNE     | enum     | C     | node tune mode                           |
| 348 | node_sysctl_params      | NODE_TUNE     | dict     | C     | extra kernel parameters                  |
| 350 | node_data_dir           | NODE_ADMIN    | path     | C     | default data disk mountpoint             |
| 351 | node_admin_enabled      | NODE_ADMIN    | bool     | G     | create admin user?                       |
| 352 | node_admin_uid          | NODE_ADMIN    | int      | G     | admin user UID                           |
| 353 | node_admin_username     | NODE_ADMIN    | string   | G     | admin user name                          |
| 354 | node_admin_ssh_exchange | NODE_ADMIN    | bool     | C     | exchange admin ssh keys?                 |
| 355 | node_admin_pk_current   | NODE_ADMIN    | bool     | A     | add current user's pkey?                 |
| 356 | node_admin_pk_list      | NODE_ADMIN    | key[]    | C     | pks to be added to admin                 |
| 360 | node_timezone           | NODE_TIME     | string   | C     | node timezone                            |
| 361 | node_ntp_enabled        | NODE_TIME     | bool     | C     | setup ntp on node?                       |
| 362 | node_ntp_service        | NODE_TIME     | enum     | C     | ntp mode: ntp or chrony?                 |
| 363 | node_ntp_servers        | NODE_TIME     | string[] | C     | ntp server list                          |
| 364 | node_crontab_overwrite  | NODE_TIME     | bool     | C/I   | overwrite instead of append /etc/crontab |
| 365 | node_crontab            | NODE_TIME     | string[] | C/I   | crontab list of node                     |
| 370 | docker_enabled          | DOCKER        | bool     | C     | docker enabled?                          |
| 371 | docker_cgroups_driver   | DOCKER        | string   | C     | docker cgroup driver                     |
| 372 | docker_registry_mirrors | DOCKER        | string[] | C     | docker registry mirror list              |
| 373 | docker_image_cache      | DOCKER        | string   | C     | docker image cache tarball               |
| 380 | node_exporter_enabled   | NODE_EXPORTER | bool     | C     | node_exporter enabled?                   |
| 381 | node_exporter_port      | NODE_EXPORTER | int      | C     | node_exporter listen port                |
| 382 | node_exporter_options   | NODE_EXPORTER | string   | C/I   | node_exporter extra cli args             |
| 390 | promtail_enabled        | PROMTAIL      | bool     | C     | promtail enabled?                        |
| 391 | promtail_clean          | PROMTAIL      | bool     | C/A   | remove promtail status file?             |
| 392 | promtail_port           | PROMTAIL      | int      | G     | promtail listen port                     |
| 393 | promtail_options        | PROMTAIL      | string   | C/I   | promtail cli args                        |
| 394 | promtail_positions      | PROMTAIL      | string   | C     | path to store promtail status file       |

NODE_IDENTITY

Each node has identity parameters that are configured through the parameters in <cluster>.hosts and <cluster>.vars.

Pigsty uses the IP address as the unique identifier of a database node. This must be the IP that the database instance listens on and serves from, but it should not be a public IP. Users can still operate a managed node indirectly through an SSH tunnel or jump host, yet the primary IPv4 address remains the core identity of the node. This is very important: the IP is the inventory_hostname of the host in the inventory, reflected as the key in the <cluster>.hosts object.

In the Pigsty monitor system, nodes also have two crucial identity parameters: nodename and node_cluster. These are used in the monitor system as the node's instance identity (ins) and cluster identity (cls). Pigsty uses node-exclusive 1:1 deployment by default, so the identity params of the instances (pg_cluster and pg_instance) can be borrowed as the cls and ins labels of the nodes via the pg_hostname parameter.

nodename and node_cluster are not mandatory; when left blank or empty, nodename will use the node’s current hostname, while node_cluster will use the fixed default value: nodes.

| Name               | Type   | Level | Necessity | Comment           |
|--------------------|--------|-------|-----------|-------------------|
| inventory_hostname | ip     | -     | Required  | Node IP           |
| nodename           | string | I     | Optional  | Node name         |
| node_cluster       | string | C     | Optional  | Node cluster name |

The following cluster config declares a three-node node cluster:

```yaml
node-test:
  hosts:
    10.10.10.11: { nodename: node-test-1 }
    10.10.10.12: { nodename: node-test-2 }
    10.10.10.13: { nodename: node-test-3 }
  vars:
    node_cluster: node-test
```

meta_node

This node is a meta node, type: bool, level: C, default value: false.

Nodes in the meta group carry this flag in the inventory by default. During node package installation, nodes with this flag will additionally install the RPM packages specified by node_packages_meta and the Python packages specified by node_packages_meta_pip.
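For instance, the flag is normally carried by the meta group in the inventory (a minimal sketch; the IP follows the Pigsty sandbox convention):

```yaml
meta:                        # the meta node group
  hosts:
    10.10.10.10:
      meta_node: true        # node_packages_meta & node_packages_meta_pip installed here
```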

nodename

Specifies the node name, type: string, level: I, default value is null.

A null or empty string means no name is specified, and the node's existing hostname is used directly as the node name.

The node name nodename will be used as the name of the node instance (ins tag) in the Pigsty monitor system. In addition, if nodename_overwrite is true, the node name will also be used as the HOSTNAME.

Note: If the pg_hostname option is enabled, Pigsty will borrow the identity of the 1:1 corresponding PG instance on the node (e.g., pg-test-1) as the node name when initializing the node.
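As a sketch, a node hosting the PG instance pg-test-1 with pg_hostname enabled would pick up that identity (the cluster name and IP here follow the Pigsty demo conventions):

```yaml
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }   # instance pg-test-1
  vars:
    pg_cluster: pg-test
    pg_hostname: true        # nodename := pg-test-1, node_cluster := pg-test
```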

node_cluster

Node cluster name, type: string, level: C, default value: "nodes".

When left null or empty, the fixed default value nodes is used as the node cluster identity.

The node cluster name node_cluster will be used as the node cluster (cls) label in the Pigsty monitor system.

Note: If the pg_hostname option is enabled, Pigsty will borrow the identity of the 1:1 corresponding PG cluster on the node (e.g., pg-test) as the node cluster name when initializing the node.

nodename_overwrite

Override machine HOSTNAME with node name, type: bool, level: C, default value: true.

When true (the default), a non-empty node name nodename will override the current hostname of the node.

No changes are made to the hostname if the nodename parameter is undefined, empty, or an empty string.

nodename_exchange

Exchange hostnames between playbook nodes, type: bool, level: C, default value: false.

When this parameter is enabled, node names are exchanged among the nodes executing the nodes.yml playbook and written to /etc/hosts.


NODE_DNS

Pigsty configs static DNS records and dynamic DNS resolver for the nodes.

If you already have a DNS server, set node_dns_method to none to disable dynamic DNS setup.

node_etc_hosts

DNS records at the cluster/instance level, type: string[], level: C/I, default value is an empty array [].

node_etc_hosts is an array. Each element is a string of the form <ip> <domain_name>, representing a DNS resolution record. Each record is written to /etc/hosts when the node is initialized; this parameter is suitable for cluster/instance-specific records.
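A hedged example of instance-level records (the hostnames below are hypothetical, not Pigsty defaults):

```yaml
node_etc_hosts:              # extra static records for this cluster/instance
  - 10.10.10.11 node-test-1.example.local
  - 10.10.10.12 node-test-2.example.local
```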

node_etc_hosts_default

Static DNS records written to every node, type: string[], level: C, default value:

```yaml
node_etc_hosts_default:      # static dns records in /etc/hosts
  - 10.10.10.10 meta pigsty c.pigsty g.pigsty l.pigsty p.pigsty a.pigsty cli.pigsty lab.pigsty api.pigsty
```

node_etc_hosts_default is an array. Each element is a string of the form <ip> <domain_name>, representing a DNS resolution record. Each record is written to /etc/hosts when the node is initialized; this parameter is suitable for the global config of infra addresses.

Make sure a DNS record such as 10.10.10.10 pigsty yum.pigsty is written to /etc/hosts so that the local yum repo can be accessed by domain name before the DNS nameserver starts.

node_dns_method

Config DNS server, type: enum, level: C, default value: "add".

The default config of dynamic DNS servers for machine nodes has three modes:

  • add: Append the records in node_dns_servers to /etc/resolv.conf and keep the existing DNS servers. (default)
  • overwrite: Overwrite /etc/resolv.conf with the record in node_dns_servers
  • none: If a DNS server is provided in the production env, the DNS server config can be skipped.
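For example, to keep an existing resolver setup while adding the Pigsty nameserver, the default mode can be stated explicitly (a sketch):

```yaml
node_dns_method: add         # append, keeping existing nameservers
node_dns_servers: [ 10.10.10.10 ]
```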

node_dns_servers

Config the dynamic DNS server list, type: string[], level: C, default value: ["10.10.10.10"].

Pigsty adds meta nodes as DNS Server by default, and DNSMASQ on the meta node responds to DNS requests in the env.

```yaml
node_dns_servers:            # dynamic nameserver in /etc/resolv.conf
  - 10.10.10.10
```

node_dns_options

DNS resolution options, type: string[], level: C. If node_dns_method is configured as add or overwrite, the records in this parameter will be appended to or will overwrite /etc/resolv.conf. See the Linux doc on /etc/resolv.conf for the exact format.

The default parsing options added by Pigsty:

```yaml
- options single-request-reopen timeout:1 rotate
- domain service.consul
```

NODE_REPO

Pigsty configures yum repos and installs packages from them.

node_repo_method

How the node uses the yum repo, type: enum, level: C, default value: "local".

There are three modes for configuring the node's yum repo:

  • local: Use the local yum repo on the meta node, the default behavior (recommended).
  • public: Install from internet sources; the public repos in repo_upstream are written to /etc/yum.repos.d/.
  • none: Do not configure or modify local repo files.
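For example, a node with direct internet access could skip the local repo entirely (a sketch):

```yaml
node_repo_method: public     # write repo_upstream public repos to /etc/yum.repos.d/
node_repo_remove: true       # back up existing repo files first
```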

node_repo_remove

Remove existing yum repos, type: bool, level: C, default value: true.

If enabled, Pigsty will move existing repo files in /etc/yum.repos.d to the backup dir /etc/yum.repos.d/backup.

node_repo_local_urls

URL list of local repo files, type: url[], level: C.

When node_repo_method is configured as local, the repo file URLs listed here will be downloaded to /etc/yum.repos.d.

By default, this array holds the repo file URL of the local yum repo on the meta node:

```yaml
node_repo_local_urls:
  - http://yum.pigsty/pigsty.repo
```

NODE_PACKAGES

node_packages

List of extra packages to install on the node, type: string[], level: C, default value: [].

This is a list of extra packages to install via yum, empty by default.

While node_packages_default is usually configured globally, node_packages makes exceptions for specific nodes.
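A sketch of a cluster-level exception (the package names are illustrative, not Pigsty defaults):

```yaml
pg-test:
  vars:
    node_packages: [ vim-enhanced, tmux ]   # extra pkgs for this cluster only
```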

node_packages_default

List of packages installed on all nodes, type: string[], level: C.

The package list is an array, but each element can contain multiple packages separated by commas. This parameter defines the baseline software that Pigsty installs on every node.

node_packages_meta

List of software required by the meta node, type: string[], level: G, default value:

```yaml
node_packages_meta:          # packages for meta nodes only
  - grafana,prometheus2,alertmanager,loki,nginx_exporter,blackbox_exporter,pushgateway,redis,postgresql14
  - nginx,ansible,pgbadger,python-psycopg2,dnsmasq,polysh,coreutils,diffutils
```

The pkgs listed in node_packages_meta will only be installed on the meta node, and infra software generally used on the meta node must be specified here.

node_packages_meta_pip

Package installed on the meta node via pip3, type: string, level: G, default value: "jupyterlab".

The package will be downloaded to {{ nginx_home }}/{{ repo_name }}/python dir and then installed uniformly.

Currently, jupyterlab will be installed by default, providing a complete Python runtime env.


NODE_TUNE

Configure some features, kernel modules, and tuning templates on the node.

node_disable_firewall

Disable the node firewall, type: bool, level: C, default value: true. Please keep it disabled.

node_disable_selinux

Disable node SELinux, type: bool, level: C, default value: true. Please keep it disabled.

node_disable_numa

Disable node NUMA, type: bool, level: C, default value: false.

NUMA is not disabled by default. Note that disabling NUMA requires a machine reboot before it takes effect!

If you don't know how to bind the database to specific CPU cores (affinity), it is recommended to disable NUMA when running a database in a production env.

node_disable_swap

Turn off node SWAP, type: bool, level: C, default value: false.

Disabling SWAP is not recommended in general, but it can be done to improve performance when there is enough memory and the database is deployed exclusively.

SWAP should be disabled when your node is used for a Kubernetes deployment.

node_static_network

Use static network config, type: bool, level: C, default value: true, enabled by default.

Enabling static networking means that machine reboots will not overwrite your DNS resolv config with NIC changes. It is recommended to keep it enabled.

node_disk_prefetch

Enable disk prefetch (read-ahead), type: bool, level: C, default value: false, not enabled by default.

This optimizes throughput for instances deployed on HDDs; enabling it is recommended when using mechanical disks.

node_kernel_modules

Enabled kernel module, type: string[], level: C, default value:

An array of kernel module names declaring the kernel modules to be enabled on the node. Pigsty enables the following kernel modules by default:

```yaml
node_kernel_modules: [ softdog, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
```

node_tune

Node tuning mode, type: enum, level: C, default value: "tiny".

Pre-built tuning profiles based on the tuned service. There are four pre-defined modes:

  • tiny: Micro Virtual Machine
  • oltp: Regular OLTP templates with optimized latency
  • olap : Regular OLAP templates to optimize throughput
  • crit: Core financial business templates, optimizing the number of dirty pages

Usually, the database tuning template pg_conf should be paired with the machine tuning template. See Customize PGSQL Template for details.
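For example, an analytical cluster would pair the two templates (a sketch; the pg_conf template names follow the Customize PGSQL Template doc):

```yaml
node_tune: olap              # machine tuned for throughput
pg_conf: olap.yml            # matching database tuning template
```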

node_sysctl_params

OS kernel parameters, type: dict, level: C, default value is an empty dict. A K-V dictionary where each key is a kernel sysctl parameter name and each value is the parameter value.
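A hedged example of overriding kernel parameters (the values are illustrative, not recommendations):

```yaml
node_sysctl_params:
  net.core.somaxconn: 65535        # raise the listen backlog
  vm.overcommit_memory: 1          # illustrative value only
```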


NODE_ADMIN

Host Node Admin Users.

node_data_dir

Mountpoint of the main data disk, level: C, default value: /data.

If specified, this path will be used as the main data disk mountpoint.

If the path does not exist, the directory will be created and a warning thrown.

The data dir is owned by root with mode 0777.

node_admin_enabled

Create admin user, type: bool, level: G, default value: true.

When enabled, an admin user named dba (uid=88) with passwordless sudo and ssh is created on each node; it can access the other nodes in the env and perform sudo from the meta node via SSH without a password.

node_admin_uid

Admin user UID, type: int, level: G, default value: 88, note UID namespace conflict.

node_admin_username

Admin username, type: string, level: G, default value: "dba".

node_admin_ssh_exchange

Exchange node admin SSH keys between instances, type: bool, level: C, default value: true.

When enabled, Pigsty will exchange SSH public keys between members during playbook execution, allowing the admin user node_admin_username to access the cluster's nodes from one another.

node_admin_pk_current

Whether to add the public key of the current node & user to the admin account, type: bool, level: A, default value: true.

When enabled, on the current node, the SSH public key (~/.ssh/id_rsa.pub) of the current user is copied to the authorized_keys of the target node admin user.

When deploying in a production env, pay close attention to this parameter: it installs the public key of the user currently executing the playbook onto the admin user of all machines.

node_admin_pk_list

The list of public keys that can log in as the admin user, type: key[], level: C, default value is an empty array; the demo includes the default public key of the vagrant user.

Each element of the array is a string containing a key written to the admin user's ~/.ssh/authorized_keys; the holder of the corresponding private key can log in as the admin user.

When deploying in production envs, be sure to note this parameter and add only trusted keys to this list.
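A sketch of the parameter; the key below is a placeholder, not a real credential:

```yaml
node_admin_pk_list:
  - 'ssh-rsa AAAAB3NzaC1yc2E... admin@example.com'   # placeholder trusted key
```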


NODE_TIME

Node timezone and time synchronization.

If the node is already configured with an NTP server, you can configure node_ntp_enabled to false to skip the setting of the NTP service.

node_timezone

Node timezone setting, type: string, level: C, default value is null.

The timezone used in the demo is "Asia/Hong_Kong"; please adjust it to your actual situation. (Avoid the Asia/Shanghai timezone: its abbreviation CST causes a series of log timezone parsing problems.)

Leave the value blank and Pigsty will not modify the timezone config of the node.

node_ntp_enabled

Set up the NTP service?, type: bool, level: C, default value: true.

When true, Pigsty will override the node's /etc/ntp.conf or /etc/chrony.conf with the NTP servers specified by node_ntp_servers.

If the node already has an NTP server configured, it is recommended to turn this off and keep the original NTP server.

node_ntp_service

NTP service type: ntp or chrony, type: enum, level: C, default value: "ntp".

Specify the type of NTP service used by the system; by default, ntp is used as the time service:

  • ntp: Traditional NTP Service
  • chrony: Time services used by CentOS 7/8 by default

It only takes effect if node_ntp_enabled is true.

node_ntp_servers

List of NTP servers, type: string[], level: C, default value:

```yaml
node_ntp_servers:            # NTP server list
  - pool cn.pool.ntp.org iburst
  - pool pool.ntp.org iburst
  - pool time.pool.aliyun.com iburst
  - server 10.10.10.10 iburst
```

It only takes effect if node_ntp_enabled is true.

node_crontab_overwrite

Overwrite instead of append to /etc/crontab, type: bool, level: C/I, default value: true.

If true, the records in node_crontab are rendered into /etc/crontab instead of being appended to it.

node_crontab

Crontab of this node, type: string[], level: C/I, default value: [].

Each element of the array is a line written to /etc/crontab.
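A hedged example; the backup script path is hypothetical:

```yaml
node_crontab_overwrite: true
node_crontab:                # one string per /etc/crontab line
  - '00 01 * * * postgres /pg/bin/backup.sh full'   # hypothetical nightly backup
```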


DOCKER

Pigsty installs docker on all meta nodes by default; docker is disabled on common nodes by default.

docker_enabled

Enable docker on the current node? type: bool, level: C, default value: false (default true on meta nodes).

docker_cgroups_driver

CGroup driver for docker, type: string, level: C, default value: systemd.

docker_registry_mirrors

Docker registry mirror list, type: string[], level:C, default value: [].

docker_image_cache

Local image cache, type: string, level: C, default value: "/var/pigsty/docker.tar.lz4".

The local image cache will be loaded into docker when the target path exists.
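Putting the DOCKER parameters together (the mirror URL is hypothetical):

```yaml
docker_enabled: true
docker_cgroups_driver: systemd
docker_registry_mirrors: [ 'https://mirror.example.com' ]   # hypothetical mirror
docker_image_cache: /var/pigsty/docker.tar.lz4              # loaded if the file exists
```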


NODE_EXPORTER

NodeExporter is used to collect monitor metrics data from the host.

node_exporter_enabled

Enable the node metrics collector (node_exporter), type: bool, level: C, default value: true.

node_exporter_port

node_exporter listen port, type: int, level: C, default value: 9100.

node_exporter_options

Node metrics collection options, type: string, level: C/I, default value: "--no-collector.softnet --no-collector.nvme --collector.ntp --collector.tcpstat --collector.processes"

Pigsty enables three extra collectors by default (ntp, tcpstat, processes) and disables two default collectors (softnet, nvme).


PROMTAIL

Host log collection component, used together with the Loki infrastructure.

promtail_enabled

Enable the Promtail log collection service on the current node, type: bool, level: C, default value: true.

When Promtail is enabled, Pigsty will generate a config file for Promtail, as defined in the inventory, to scrape the following logs and send them to the Loki instance specified by loki_endpoint:

  • INFRA: Infra logs, collected only on meta nodes.

    • nginx-access: /var/log/nginx/access.log
    • nginx-error: /var/log/nginx/error.log
    • grafana: /var/log/grafana/grafana.log
  • NODES: Host node logs, collected on all nodes.

    • syslog: /var/log/messages
    • dmesg: /var/log/dmesg
    • cron: /var/log/cron
  • PGSQL: PostgreSQL logs, collected when a node is defined with pg_cluster.

    • postgres: /pg/data/log/*.csv
    • patroni: /pg/log/patroni.log
    • pgbouncer: /var/log/pgbouncer/pgbouncer.log
  • REDIS: Redis logs, collected when a node is defined with redis_cluster.

    • redis: /var/log/redis/*.log

promtail_clean

Remove existing state information when installing Promtail, type: bool, level: C/A, default value: false.

The default is not to clean up. When cleanup is chosen, Pigsty removes the existing state file promtail_positions when deploying Promtail, which means Promtail will re-collect all logs on the current node and send them to Loki.

promtail_port

The default port used by promtail, type: int, level: G, default value: 9080.

promtail_options

Promtail CLI params, type: string, level: C/I, default value: "-config.file=/etc/promtail.yml -config.expand-env=true".

The default params specify the config file path and enable environment variable expansion inside the config file; modifying them is not recommended.

promtail_positions

Path to the promtail status file, type: string, level: C, default value: "/var/log/positions.yaml".

Promtail records the consumption offsets of all logs, which are periodically written to the file specified by promtail_positions.

Last modified 2022-06-04: fii en docs batch 2 (61bf601)