Restoring etcd quorum
You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4]
If you lose etcd quorum, you can restore it.
If you run etcd on a separate host, you must back up etcd, take down your etcd cluster, and form a new one. You can use one healthy etcd node to form a new cluster, but you must remove all other healthy nodes.
If you run etcd as static pods on your master nodes, you stop the etcd pods, create a temporary cluster, and then restart the etcd pods.
During etcd quorum loss, applications that run on OKD are unaffected. However, platform functionality is limited to read-only operations. You cannot take actions such as scaling an application up or down, changing deployments, or running or modifying builds.
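For example, a write operation such as the following fails until quorum is restored (the deployment configuration name myapp is hypothetical):
$ oc scale dc/myapp --replicas=3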
To confirm the loss of etcd quorum, run one of the following commands and confirm that the cluster is unhealthy:
If you use the etcd v2 API, run the following command:
# ETCDCTL_API=2 etcdctl --cert-file=/etc/origin/master/master.etcd-client.crt \
--key-file /etc/origin/master/master.etcd-client.key \
--ca-file /etc/origin/master/master.etcd-ca.crt \
--endpoints="https://*master-0.example.com*:2379,\
https://*master-1.example.com*:2379,\
https://*master-2.example.com*:2379" \
cluster-health
member 165201190bf7f217 is unhealthy: got unhealthy result from https://master-0.example.com:2379
member b50b8a0acab2fa71 is unreachable: [https://master-1.example.com:2379] are all unreachable
member d40307cbca7bc2df is unreachable: [https://master-2.example.com:2379] are all unreachable
cluster is unhealthy
If you use the v3 API, run the following command:
# ETCDCTL_API=3 etcdctl --cert=/etc/origin/master/master.etcd-client.crt \
--key=/etc/origin/master/master.etcd-client.key \
--cacert=/etc/origin/master/master.etcd-ca.crt \
--endpoints="https://*master-0.example.com*:2379,\
https://*master-1.example.com*:2379,\
https://*master-2.example.com*:2379" \
endpoint health
https://master-0.example.com:2379 is unhealthy: failed to connect: context deadline exceeded
https://master-1.example.com:2379 is unhealthy: failed to connect: context deadline exceeded
https://master-2.example.com:2379 is unhealthy: failed to connect: context deadline exceeded
Error: unhealthy cluster
Note the member IDs and host names of the hosts. You use one of the nodes that can be reached to form a new cluster.
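If you need to match member IDs to host names later, you can also run the member list command on a reachable host (a sketch using the etcdctl2 alias that the installer creates, described under the backup prerequisites below):
# etcdctl2 member list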
Restoring etcd quorum for separate services
Backing up etcd
When you back up etcd, you must back up both the etcd configuration files and the etcd data.
You can use either etcd v2 or v3 API versions to back up etcd because both versions contain commands to back up the v2 and v3 data.
Backing up etcd configuration files
The etcd configuration files to be preserved are all stored in the /etc/etcd directory of the instances where etcd is running. This includes the etcd configuration file (/etc/etcd/etcd.conf) and the required certificates for cluster communication. All of these files are generated at installation time by the Ansible installer.
Procedure
For each etcd member of the cluster, back up the etcd configuration.
$ ssh master-0
# mkdir -p /backup/etcd-config-$(date +%Y%m%d)/
# cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/
The certificates and configuration files on each etcd cluster member are unique.
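Because the certificates and configuration are unique per member, keep a copy of each member's backup off the host as well. A minimal sketch, assuming a reachable backup host named backup.example.com (the host name and destination path are hypothetical):
# scp -r /backup/etcd-config-$(date +%Y%m%d)/ backup.example.com:/srv/etcd-backups/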
Backing up etcd data
Prerequisites
The OKD installer creates aliases to avoid typing all the flags: etcdctl2 for etcd v2 tasks and etcdctl3 for etcd v3 tasks. However, the etcdctl3 alias does not provide the full endpoint list to the etcdctl command, so you must specify the --endpoints option and list all the endpoints.
Before backing up etcd:
etcdctl binaries must be available or, in containerized installations, the rhel7/etcd container must be available.
Ensure that the OKD API service is running.
Ensure connectivity with the etcd cluster (port 2379/tcp), as shown in the check after this list.
Ensure the proper certificates to connect to the etcd cluster.
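To confirm connectivity to the etcd client port, a simple TCP probe is enough (a sketch; the availability of nc and the host name are assumptions):
$ nc -zv master-0.example.com 2379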
Procedure
While the etcdctl backup command is used to perform the backup, etcd v3 has no concept of a backup. Instead, you either take a snapshot from a live member with the etcdctl snapshot save command or copy the member/snap/db file from an etcd data directory.
Back up the etcd data:
If you run etcd on standalone hosts and use the v2 API, take the following actions:
Stop all etcd services by removing the etcd pod definition:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
Create the etcd data backup and copy the etcd db file:
# mkdir -p /backup/etcd-$(date +%Y%m%d)
# etcdctl2 backup \
--data-dir /var/lib/etcd \
--backup-dir /backup/etcd-$(date +%Y%m%d)
# cp /var/lib/etcd/member/snap/db /backup/etcd-$(date +%Y%m%d)
A /backup/etcd-<date>/ directory is created, where <date> represents the current date. Store the backup on an external NFS share, S3 bucket, or other external storage location.
In the case of an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.
Reboot the node to restart the etcd service.
# reboot
If you run etcd on standalone hosts and use the v3 API, run the following commands:
Clusters upgraded from previous versions of OKD might contain v2 data stores. Back up all etcd data stores.
Back up etcd v3 data:
Make a snapshot of the etcd node:
# systemctl show etcd --property=ActiveState,SubState
# mkdir -p /backup/etcd-$(date +%Y%m%d)
# etcdctl3 snapshot save /backup/etcd-$(date +%Y%m%d)/db
The etcdctl snapshot save command requires the etcd service to be running.
Stop all etcd services by removing the etcd pod definition and rebooting the host:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
Create the etcd data backup and copy the etcd db file:
# etcdctl2 backup \
--data-dir /var/lib/etcd \
--backup-dir /backup/etcd-$(date +%Y%m%d)
A /backup/etcd-<date>/ directory is created, where <date> represents the current date. Store the backup on an external NFS share, S3 bucket, or other external storage location.
In the case of an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.
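You can optionally verify the integrity of the saved snapshot before relying on it (a sketch, using the ETCDCTL_API=3 form so it does not depend on the alias; the path is the snapshot taken earlier):
# ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-$(date +%Y%m%d)/db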
If etcd runs as a static pod, run the following commands:
If you use static pods, use the v3 API.
Obtain the etcd endpoint IP address from the static pod manifest:
$ export ETCD_POD_MANIFEST="/etc/origin/node/pods/etcd.yaml"
$ export ETCD_EP=$(grep https ${ETCD_POD_MANIFEST} | cut -d '/' -f3)
Obtain the etcd pod name:
$ oc login -u system:admin
$ export ETCD_POD=$(oc get pods -n kube-system | grep -o -m 1 '\S*etcd\S*')
Take a snapshot of the etcd data in the pod and store it locally:
$ oc project kube-system
$ oc exec ${ETCD_POD} -c etcd -- /bin/bash -c "ETCDCTL_API=3 etcdctl \
--cert /etc/etcd/peer.crt \
--key /etc/etcd/peer.key \
--cacert /etc/etcd/ca.crt \
--endpoints <ETCD_EP> \ (1)
snapshot save /var/lib/etcd/snapshot.db"
1 Specify the etcd endpoint IP address that you obtained.
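The snapshot is written inside the pod at /var/lib/etcd/snapshot.db. Because the static etcd pod typically mounts /var/lib/etcd from the host, you can usually copy the file into your backup location from the master itself (an assumption about the mount layout; verify the path in your etcd pod manifest):
# mkdir -p /backup/etcd-$(date +%Y%m%d)
# cp /var/lib/etcd/snapshot.db /backup/etcd-$(date +%Y%m%d)/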
Removing an etcd host
If an etcd host fails beyond restoration, remove it from the cluster. To recover from an etcd quorum loss, you must also remove all healthy etcd nodes but one from your cluster.
Steps to be performed on all master hosts
Procedure
Remove each other etcd host from the etcd cluster. Run the following command for each etcd node:
# etcdctl -C https://<surviving host IP address>:2379 \
--ca-file=/etc/etcd/ca.crt \
--cert-file=/etc/etcd/peer.crt \
--key-file=/etc/etcd/peer.key member remove <failed member ID>
Remove the other etcd hosts from the /etc/origin/master/master-config.yaml master configuration file on every master:
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://master-0.example.com:2379
    - https://master-1.example.com:2379 (1)
    - https://master-2.example.com:2379 (1)
1 The host to remove.
Restart the master API and controller services on every master:
# master-restart api
# master-restart controllers
Steps to be performed in the current etcd cluster
Procedure
Remove the failed host from the cluster:
# etcdctl2 cluster-health
member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
failed to check the health of member 8372784203e11288 on https://192.168.55.21:2379: Get https://192.168.55.21:2379/health: dial tcp 192.168.55.21:2379: getsockopt: connection refused
member 8372784203e11288 is unreachable: [https://192.168.55.21:2379] are all unreachable
member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
cluster is healthy
# etcdctl2 member remove 8372784203e11288 (1)
Removed member 8372784203e11288 from cluster
# etcdctl2 cluster-health
member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
cluster is healthy
1 The remove command requires the etcd ID, not the hostname.
To ensure the etcd configuration does not use the failed host when the etcd service is restarted, modify the /etc/etcd/etcd.conf file on all remaining etcd hosts and remove the failed host from the value of the ETCD_INITIAL_CLUSTER variable:
# vi /etc/etcd/etcd.conf
For example:
ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380,master-2.example.com=https://192.168.55.13:2380
becomes:
ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380
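If you prefer to script the edit, a single sed substitution can remove the failed member entry (a sketch based on the example values above; adjust the host name and URL to match your cluster):
# sed -i 's|,master-2.example.com=https://192.168.55.13:2380||' /etc/etcd/etcd.conf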
Restarting the etcd services is not required, because the failed host is removed using etcdctl.
Modify the Ansible inventory file to reflect the current status of the cluster and to avoid issues when re-running a playbook:
[OSEv3:children]
masters
nodes
etcd
... [OUTPUT ABBREVIATED] ...
[etcd]
master-0.example.com
master-1.example.com
If you are using Flannel, modify the flanneld service configuration located at /etc/sysconfig/flanneld on every host and remove the etcd host:
FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379
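For example, after removing master-2.example.com, the line would read (illustrative, using the host names from this topic):
FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379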
Restart the flanneld service:
# systemctl restart flanneld.service
Creating a single-node etcd cluster
To restore the full functionality of your OKD instance, make a remaining etcd node a standalone etcd cluster.
Procedure
On the etcd node that you did not remove from the cluster, stop all etcd services by removing the etcd pod definition:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/etcd.yaml /etc/origin/node/pods-stopped/
# systemctl stop atomic-openshift-node
# mv /etc/origin/node/pods-stopped/etcd.yaml /etc/origin/node/pods/
Run the etcd service on the host, forcing a new cluster.
These commands create a custom file for the etcd service, which adds the --force-new-cluster option to the etcd start command:
# mkdir -p /etc/systemd/system/etcd.service.d/
# echo "[Service]" > /etc/systemd/system/etcd.service.d/temp.conf
# echo "ExecStart=" >> /etc/systemd/system/etcd.service.d/temp.conf
# sed -n '/ExecStart/s/"$/ --force-new-cluster"/p' \
/usr/lib/systemd/system/etcd.service \
>> /etc/systemd/system/etcd.service.d/temp.conf
# systemctl daemon-reload
# master-restart etcd
List the etcd member and confirm that the member list contains only your single etcd host:
# etcdctl member list
165201190bf7f217: name=192.168.34.20 peerURLs=http://localhost:2380 clientURLs=https://master-0.example.com:2379 isLeader=true
After restoring the data and creating a new cluster, you must update the peerURLs parameter value to use the IP address where etcd listens for peer communication:
# etcdctl member update 165201190bf7f217 https://192.168.34.20:2380 (1)
1 165201190bf7f217 is the member ID shown in the output of the previous command, and https://192.168.34.20:2380 is its IP address.
To verify, check that the IP is in the member list:
$ etcdctl2 member list
5ee217d17301: name=master-0.example.com peerURLs=https://192.168.55.8:2380 clientURLs=https://192.168.55.8:2379 isLeader=true
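After the single-node cluster is confirmed healthy, consider removing the temporary drop-in file so that later restarts do not force a new cluster again (this cleanup step is an addition, not part of the original procedure):
# rm /etc/systemd/system/etcd.service.d/temp.conf
# systemctl daemon-reload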
Adding etcd nodes after restoring
After the first instance is running, you can add multiple etcd servers to your cluster.
Procedure
Get the etcd name for the instance in the ETCD_NAME variable:
# grep ETCD_NAME /etc/etcd/etcd.conf
Get the IP address where etcd listens for peer communication:
# grep ETCD_INITIAL_ADVERTISE_PEER_URLS /etc/etcd/etcd.conf
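Both commands print the relevant lines from the configuration file. Illustrative output, assuming a host named master-1.example.com (the values are examples, not output from your cluster):
ETCD_NAME=master-1.example.com
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.55.12:2380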
If the node was previously part of an etcd cluster, delete the previous etcd data:
# rm -Rf /var/lib/etcd/*
On the etcd host where etcd is properly running, add the new member:
# etcdctl3 member add *<name>* \
--peer-urls="*<advertise_peer_urls>*"
The command outputs some variables. For example:
ETCD_NAME="master2"
ETCD_INITIAL_CLUSTER="master-0.example.com=https://192.168.55.8:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
Add the values from the previous command to the /etc/etcd/etcd.conf file of the new host:
# vi /etc/etcd/etcd.conf
Start the etcd service on the node joining the cluster:
# systemctl start etcd.service
Check for error messages:
# master-logs etcd etcd
Repeat the previous steps for every etcd node to be added.
Once you add all the nodes, verify the cluster status and cluster health:
# etcdctl3 endpoint health --endpoints="https://<etcd_host1>:2379,https://<etcd_host2>:2379,https://<etcd_host3>:2379"
https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 1.423459ms
https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.767481ms
https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.599694ms
# etcdctl3 endpoint status --endpoints="https://<etcd_host1>:2379,https://<etcd_host2>:2379,https://<etcd_host3>:2379"
https://master-0.example.com:2379, 40bef1f6c79b3163, 3.2.5, 28 MB, true, 9, 2878
https://master-1.example.com:2379, 1ea57201a3ff620a, 3.2.5, 28 MB, false, 9, 2878
https://master-2.example.com:2379, 59229711e4bc65c8, 3.2.5, 28 MB, false, 9, 2878
Restoring etcd quorum for static pods
If you lose etcd quorum on a cluster that uses static pods for etcd, take the following steps:
Procedure
Stop the etcd pod:
$ mv /etc/origin/node/pods/etcd.yaml .
Temporarily force a new cluster on the etcd host:
$ cp /etc/etcd/etcd.conf etcd.conf.bak
$ echo "ETCD_FORCE_NEW_CLUSTER=true" >> /etc/etcd/etcd.conf
Restart the etcd pod:
$ mv etcd.yaml /etc/origin/node/pods/.
Stop the etcd pod and remove the ETCD_FORCE_NEW_CLUSTER setting:
$ mv /etc/origin/node/pods/etcd.yaml .
$ rm /etc/etcd/etcd.conf
$ mv etcd.conf.bak /etc/etcd/etcd.conf
Restart the etcd pod:
$ mv etcd.yaml /etc/origin/node/pods/.
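Once the pod is running again with its original configuration, you can confirm that the cluster reports healthy (a sketch reusing the ETCD_POD variable from the backup procedure; re-export it if your shell session changed, and replace <ETCD_EP> with the endpoint value you obtained earlier):
$ oc exec ${ETCD_POD} -c etcd -- /bin/bash -c "ETCDCTL_API=3 etcdctl \
  --cert /etc/etcd/peer.crt \
  --key /etc/etcd/peer.key \
  --cacert /etc/etcd/ca.crt \
  --endpoints <ETCD_EP> \
  endpoint health"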