Disaster Recovery

Restoring Mon Quorum

In extreme circumstances, the mons may lose quorum. If the mons cannot form quorum again on their own, there is a manual procedure to restore it. The only requirement is that at least one mon is still healthy. The following steps remove the unhealthy mons from quorum, allow you to form quorum again with the single healthy mon, and then grow the quorum back to its original size.

For example, if you have three mons and lose quorum, you will need to remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon.
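To see which mons are unhealthy before you begin, check the mon pods. A quick sketch, assuming the app=rook-ceph-mon label that Rook applies to mon pods:

  # list the mon pods; pods that are not Running or are crash-looping
  # are the candidates to remove from the monmap
  kubectl -n rook-ceph get pods -l app=rook-ceph-mon -o wide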

Stop the operator

First, stop the operator so it will not try to fail over the mons while we are modifying the monmap.

  kubectl -n rook-ceph-system delete deployment rook-ceph-operator
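Before proceeding, confirm that the operator pod is gone. A quick check, assuming the pod carries the standard app=rook-ceph-operator label; it should return no pods:

  # verify the operator pod has been removed along with its deployment
  kubectl -n rook-ceph-system get pods -l app=rook-ceph-operator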

Inject a new monmap

WARNING: Injecting a monmap must be done very carefully. If run incorrectly, your cluster could be permanently destroyed.

The Ceph monmap keeps track of the mon quorum. We will update the monmap to only contain the healthy mon. In this example, the healthy mon is rook-ceph-mon-b, while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c.
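If you are not sure which mon is healthy, the mon logs usually make it clear. A minimal check, using the same mon=<name> label that the restart step later in this procedure relies on:

  # a healthy mon logs normal quorum activity; an unhealthy mon
  # typically logs repeated probing or election attempts
  kubectl -n rook-ceph logs -l mon=rook-ceph-mon-b --tail=20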

Connect to the pod of a healthy mon and run the following commands.

  kubectl -n rook-ceph exec -it <mon-pod> -- bash

  # set a few simple variables
  cluster_namespace=rook-ceph
  good_mon_id=rook-ceph-mon-b
  monmap_path=/tmp/monmap

  # make sure the quorum lock file does not exist
  rm -f /var/lib/rook/${good_mon_id}/data/store.db/LOCK

  # extract the monmap to a file
  ceph-mon -i ${good_mon_id} --extract-monmap ${monmap_path} \
    --cluster=${cluster_namespace} --mon-data=/var/lib/rook/${good_mon_id}/data \
    --conf=/var/lib/rook/${good_mon_id}/${cluster_namespace}.config \
    --keyring=/var/lib/rook/${good_mon_id}/keyring \
    --monmap=/var/lib/rook/${good_mon_id}/monmap

  # review the contents of the monmap
  monmaptool --print ${monmap_path}

  # remove the bad mon(s) from the monmap
  monmaptool ${monmap_path} --rm <bad_mon>

  # in this example we remove rook-ceph-mon-a and rook-ceph-mon-c:
  monmaptool ${monmap_path} --rm rook-ceph-mon-a
  monmaptool ${monmap_path} --rm rook-ceph-mon-c

  # inject the monmap into the good mon
  ceph-mon -i ${good_mon_id} --inject-monmap ${monmap_path} \
    --cluster=${cluster_namespace} --mon-data=/var/lib/rook/${good_mon_id}/data \
    --conf=/var/lib/rook/${good_mon_id}/${cluster_namespace}.config \
    --keyring=/var/lib/rook/${good_mon_id}/keyring
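Before exiting the shell, you can optionally verify the injection by extracting the monmap a second time and printing it. This is a sketch that reuses the variables above and writes to a separate file (${monmap_path}.check, a name chosen here for illustration); if extraction complains about the store lock, remove the LOCK file again as above.

  # re-extract the monmap and confirm only the good mon remains
  ceph-mon -i ${good_mon_id} --extract-monmap ${monmap_path}.check \
    --cluster=${cluster_namespace} --mon-data=/var/lib/rook/${good_mon_id}/data \
    --conf=/var/lib/rook/${good_mon_id}/${cluster_namespace}.config \
    --keyring=/var/lib/rook/${good_mon_id}/keyring
  monmaptool --print ${monmap_path}.check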

Exit the shell to continue.

Edit the Rook configmap for mons

Edit the configmap that the operator uses to track the mons.

  kubectl -n rook-ceph edit configmap rook-ceph-mon-endpoints

In the data element you will see three mons such as the following (or more, depending on your monCount):

  data: rook-ceph-mon-a=10.100.35.200:6790;rook-ceph-mon-b=10.100.35.233:6790;rook-ceph-mon-c=10.100.35.12:6790

Delete the bad mons from the list, for example to end up with a single good mon:

  data: rook-ceph-mon-b=10.100.35.233:6790

Save the file and exit.
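You can confirm the edit without reopening the editor by reading the data element back:

  # print the mon endpoint list; only the good mon should remain
  kubectl -n rook-ceph get configmap rook-ceph-mon-endpoints -o jsonpath='{.data.data}'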

Restart the mon

You will need to restart the good mon pod so it picks up the changes. Delete the good mon pod and Kubernetes will automatically restart it.

  kubectl -n rook-ceph delete pod -l mon=rook-ceph-mon-b
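You can watch the replacement pod come up, again using the mon=rook-ceph-mon-b label:

  # wait for the good mon pod to be recreated and reach Running
  kubectl -n rook-ceph get pods -l mon=rook-ceph-mon-b -w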

Start the Rook toolbox and verify the status of the cluster.

  ceph -s
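If you prefer not to open a shell in the toolbox, you can run the check in one step. A sketch, assuming the toolbox pod is named rook-ceph-tools as in the standard toolbox manifest:

  # run the status check directly in the toolbox pod
  kubectl -n rook-ceph exec -it rook-ceph-tools -- ceph -s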

The status should show one mon in quorum. If the status looks good, your cluster should be healthy again.

Restart the operator

Start the Rook operator again to resume monitoring the health of the cluster.

  # create the operator; it is safe to ignore errors saying a number of resources already exist
  kubectl create -f operator.yaml

The operator will automatically add more mons to increase the quorum size again, depending on the monCount.
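You can watch the new mons come up as the operator reconciles, again assuming the app=rook-ceph-mon label:

  # watch the operator create additional mons until the configured
  # monCount is reached
  kubectl -n rook-ceph get pods -l app=rook-ceph-mon -w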