Running ZooKeeper, A Distributed System Coordinator

This tutorial demonstrates running Apache ZooKeeper on Kubernetes using StatefulSets, PodDisruptionBudgets, and PodAntiAffinity.

Before you begin

Before starting this tutorial, you should be familiar with the Kubernetes concepts used throughout it: Pods, cluster DNS, Headless Services, PersistentVolumes and their provisioning, StatefulSets, PodDisruptionBudgets, PodAntiAffinity, and the kubectl CLI.

You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and drain the cluster’s nodes. This means that the cluster will terminate and evict all Pods on its nodes, and the nodes will temporarily become unschedulable. You should use a dedicated cluster for this tutorial, or you should ensure that the disruption you cause will not interfere with other tenants.

This tutorial assumes that you have configured your cluster to dynamically provision PersistentVolumes. If your cluster is not configured to do so, you will have to manually provision three 10 GiB volumes (matching the storage request in the manifest below) before starting this tutorial.

Objectives

After this tutorial, you will know the following.

  • How to deploy a ZooKeeper ensemble using StatefulSet.
  • How to consistently configure the ensemble.
  • How to spread the deployment of ZooKeeper servers in the ensemble.
  • How to use PodDisruptionBudgets to ensure service availability during planned maintenance.

ZooKeeper

Apache ZooKeeper is a distributed, open-source coordination service for distributed applications. ZooKeeper allows you to read, write, and observe updates to data. Data is organized in a filesystem-like hierarchy and replicated to all ZooKeeper servers in the ensemble (a set of ZooKeeper servers). All operations on data are atomic and sequentially consistent. ZooKeeper ensures this by using the Zab consensus protocol to replicate a state machine across all servers in the ensemble.

The ensemble uses the Zab protocol to elect a leader, and the ensemble cannot write data until that election is complete. Once complete, the ensemble uses Zab to ensure that it replicates all writes to a quorum before it acknowledges and makes them visible to clients. Without respect to weighted quorums, a quorum is a majority component of the ensemble containing the current leader. For instance, if the ensemble has three servers, a component that contains the leader and one other server constitutes a quorum. If the ensemble cannot achieve a quorum, the ensemble cannot write data.

ZooKeeper servers keep their entire state machine in memory, and write every mutation to a durable WAL (Write Ahead Log) on storage media. When a server crashes, it can recover its previous state by replaying the WAL. To prevent the WAL from growing without bound, ZooKeeper servers periodically snapshot their in-memory state to storage media. These snapshots can be loaded directly into memory, and all WAL entries that preceded the snapshot may be discarded.

Creating a ZooKeeper ensemble

The manifest below contains a Headless Service, a Service, a PodDisruptionBudget, and a StatefulSet.

  application/zookeeper/zookeeper.yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: zk-hs
    labels:
      app: zk
  spec:
    ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
    clusterIP: None
    selector:
      app: zk
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: zk-cs
    labels:
      app: zk
  spec:
    ports:
    - port: 2181
      name: client
    selector:
      app: zk
  ---
  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: zk-pdb
  spec:
    selector:
      matchLabels:
        app: zk
    maxUnavailable: 1
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: zk
  spec:
    selector:
      matchLabels:
        app: zk
    serviceName: zk-hs
    replicas: 3
    updateStrategy:
      type: RollingUpdate
    podManagementPolicy: OrderedReady
    template:
      metadata:
        labels:
          app: zk
      spec:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
        containers:
        - name: kubernetes-zookeeper
          imagePullPolicy: Always
          image: "registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10"
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
          ports:
          - containerPort: 2181
            name: client
          - containerPort: 2888
            name: server
          - containerPort: 3888
            name: leader-election
          command:
          - sh
          - -c
          - "start-zookeeper \
            --servers=3 \
            --data_dir=/var/lib/zookeeper/data \
            --data_log_dir=/var/lib/zookeeper/data/log \
            --conf_dir=/opt/zookeeper/conf \
            --client_port=2181 \
            --election_port=3888 \
            --server_port=2888 \
            --tick_time=2000 \
            --init_limit=10 \
            --sync_limit=5 \
            --heap=512M \
            --max_client_cnxns=60 \
            --snap_retain_count=3 \
            --purge_interval=12 \
            --max_session_timeout=40000 \
            --min_session_timeout=4000 \
            --log_level=INFO"
          readinessProbe:
            exec:
              command:
              - sh
              - -c
              - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
              - sh
              - -c
              - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
          - name: datadir
            mountPath: /var/lib/zookeeper
        securityContext:
          runAsUser: 1000
          fsGroup: 1000
    volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi

Open a terminal, and use the kubectl apply command to create the objects defined in the manifest.

  kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml

This creates the zk-hs Headless Service, the zk-cs Service, the zk-pdb PodDisruptionBudget, and the zk StatefulSet.

  service/zk-hs created
  service/zk-cs created
  poddisruptionbudget.policy/zk-pdb created
  statefulset.apps/zk created

Use kubectl get to watch the StatefulSet controller create the StatefulSet’s Pods.

  kubectl get pods -w -l app=zk

Once the zk-2 Pod is Running and Ready, use CTRL-C to terminate kubectl.

  NAME READY STATUS RESTARTS AGE
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 ContainerCreating 0 0s
  zk-0 0/1 Running 0 19s
  zk-0 1/1 Running 0 40s
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 ContainerCreating 0 0s
  zk-1 0/1 Running 0 18s
  zk-1 1/1 Running 0 40s
  zk-2 0/1 Pending 0 0s
  zk-2 0/1 Pending 0 0s
  zk-2 0/1 ContainerCreating 0 0s
  zk-2 0/1 Running 0 19s
  zk-2 1/1 Running 0 40s

The StatefulSet controller creates three Pods, and each Pod has a container with a ZooKeeper server.

Facilitating leader election

Because there is no terminating algorithm for electing a leader in an anonymous network, Zab requires explicit membership configuration to perform leader election. Each server in the ensemble needs to have a unique identifier, all servers need to know the global set of identifiers, and each identifier needs to be associated with a network address.

Use kubectl exec to get the hostnames of the Pods in the zk StatefulSet.

  for i in 0 1 2; do kubectl exec zk-$i -- hostname; done

The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of <statefulset name>-<ordinal index>. Because the replicas field of the zk StatefulSet is set to 3, the Set’s controller creates three Pods with their hostnames set to zk-0, zk-1, and zk-2.

  zk-0
  zk-1
  zk-2

The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and store each server’s identifier in a file called myid in the server’s data directory.

To examine the contents of the myid file for each server use the following command.

  for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done

Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.

  myid zk-0
  1
  myid zk-1
  2
  myid zk-2
  3
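
A minimal sketch of how a startup script could derive that identifier from the Pod's hostname; the variable names are hypothetical, and the image used in this tutorial performs an equivalent step inside its start-zookeeper script.

  # Hypothetical sketch: derive the ZooKeeper id from the ordinal in the
  # Pod's hostname (zk-0 -> 1, zk-1 -> 2, zk-2 -> 3) and write it to myid.
  HOST="$(hostname)"        # e.g. zk-0
  ORDINAL="${HOST##*-}"     # strip everything up to the last '-'
  echo $((ORDINAL + 1)) > /var/lib/zookeeper/data/myid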

To get the Fully Qualified Domain Name (FQDN) of each Pod in the zk StatefulSet use the following command.

  for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done

The zk-hs Service creates a domain for all of the Pods, zk-hs.default.svc.cluster.local.

  zk-0.zk-hs.default.svc.cluster.local
  zk-1.zk-hs.default.svc.cluster.local
  zk-2.zk-hs.default.svc.cluster.local

The A records in Kubernetes DNS resolve the FQDNs to the Pods' IP addresses. If Kubernetes reschedules the Pods, it will update the A records with the Pods' new IP addresses, but the A record names will not change.
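
If you want to verify the DNS records yourself, one hedged way is to run a temporary Pod and resolve a server's FQDN from inside the cluster; this assumes the busybox:1.28 image is available to your cluster and that the ensemble runs in the default namespace.

  kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
    -- nslookup zk-0.zk-hs.default.svc.cluster.local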

ZooKeeper stores its application configuration in a file named zoo.cfg. Use kubectl exec to view the contents of the zoo.cfg file in the zk-0 Pod.

  kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg

In the server.1, server.2, and server.3 properties at the bottom of the file, the 1, 2, and 3 correspond to the identifiers in the ZooKeeper servers’ myid files. They are set to the FQDNs for the Pods in the zk StatefulSet.

  clientPort=2181
  dataDir=/var/lib/zookeeper/data
  dataLogDir=/var/lib/zookeeper/log
  tickTime=2000
  initLimit=10
  syncLimit=2000
  maxClientCnxns=60
  minSessionTimeout= 4000
  maxSessionTimeout= 40000
  autopurge.snapRetainCount=3
  autopurge.purgeInterval=0
  server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
  server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
  server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888

Achieving consensus

Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. This is necessary to allow the processes in the system to agree on which processes have committed which data. If two Pods were launched with the same ordinal, two ZooKeeper servers would both identify themselves as the same server. The StatefulSet controller prevents this: it creates each Pod sequentially, in order of ordinal index, and waits for each Pod to be Running and Ready before creating the next, so no two Pods ever share an ordinal or an identity.

  kubectl get pods -w -l app=zk

  NAME READY STATUS RESTARTS AGE
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 ContainerCreating 0 0s
  zk-0 0/1 Running 0 19s
  zk-0 1/1 Running 0 40s
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 ContainerCreating 0 0s
  zk-1 0/1 Running 0 18s
  zk-1 1/1 Running 0 40s
  zk-2 0/1 Pending 0 0s
  zk-2 0/1 Pending 0 0s
  zk-2 0/1 ContainerCreating 0 0s
  zk-2 0/1 Running 0 19s
  zk-2 1/1 Running 0 40s

The A records for each Pod are entered when the Pod becomes Ready. Therefore, the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper server claiming the identity configured in its myid file.

  zk-0.zk-hs.default.svc.cluster.local
  zk-1.zk-hs.default.svc.cluster.local
  zk-2.zk-hs.default.svc.cluster.local

This ensures that the server.N properties in the ZooKeeper servers' zoo.cfg files represent a correctly configured ensemble.

  server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
  server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
  server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888

When the servers use the Zab protocol to attempt to commit a value, they will either achieve consensus and commit the value (if leader election has succeeded and at least two of the Pods are Running and Ready), or they will fail to do so (if either of the conditions are not met). No state will arise where one server acknowledges a write on behalf of another.

Sanity testing the ensemble

The most basic sanity test is to write data to one ZooKeeper server and to read the data from another.

The command below executes the zkCli.sh script to write world to the path /hello on the zk-0 Pod in the ensemble.

  kubectl exec zk-0 -- zkCli.sh create /hello world

  WATCHER::
  WatchedEvent state:SyncConnected type:None path:null
  Created /hello

To get the data from the zk-1 Pod use the following command.

  kubectl exec zk-1 -- zkCli.sh get /hello

The data that you created on zk-0 is available on all the servers in the ensemble.

  WATCHER::
  WatchedEvent state:SyncConnected type:None path:null
  world
  cZxid = 0x100000002
  ctime = Thu Dec 08 15:13:30 UTC 2016
  mZxid = 0x100000002
  mtime = Thu Dec 08 15:13:30 UTC 2016
  pZxid = 0x100000002
  cversion = 0
  dataVersion = 0
  aclVersion = 0
  ephemeralOwner = 0x0
  dataLength = 5
  numChildren = 0

Providing durable storage

As mentioned in the ZooKeeper Basics section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots of its in-memory state to storage media. Using WALs to provide durability is a common technique for applications that use consensus protocols to achieve a replicated state machine.
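
If you want to see the WAL segments and snapshots on disk, the listing below is a hedged check; the version-2 subdirectory is ZooKeeper's standard on-disk layout under the data directory configured in this tutorial.

  kubectl exec zk-0 -- ls /var/lib/zookeeper/data/version-2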

Use the kubectl delete command to delete the zk StatefulSet.

  kubectl delete statefulset zk

  statefulset.apps "zk" deleted

Watch the termination of the Pods in the StatefulSet.

  kubectl get pods -w -l app=zk

When zk-0 is fully terminated, use CTRL-C to terminate kubectl.

  zk-2 1/1 Terminating 0 9m
  zk-0 1/1 Terminating 0 11m
  zk-1 1/1 Terminating 0 10m
  zk-2 0/1 Terminating 0 9m
  zk-2 0/1 Terminating 0 9m
  zk-2 0/1 Terminating 0 9m
  zk-1 0/1 Terminating 0 10m
  zk-1 0/1 Terminating 0 10m
  zk-1 0/1 Terminating 0 10m
  zk-0 0/1 Terminating 0 11m
  zk-0 0/1 Terminating 0 11m
  zk-0 0/1 Terminating 0 11m

Reapply the manifest in zookeeper.yaml.

  kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml

This creates the zk StatefulSet object, but the other API objects in the manifest are not modified because they already exist.

Watch the StatefulSet controller recreate the StatefulSet’s Pods.

  kubectl get pods -w -l app=zk

Once the zk-2 Pod is Running and Ready, use CTRL-C to terminate kubectl.

  NAME READY STATUS RESTARTS AGE
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 ContainerCreating 0 0s
  zk-0 0/1 Running 0 19s
  zk-0 1/1 Running 0 40s
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 ContainerCreating 0 0s
  zk-1 0/1 Running 0 18s
  zk-1 1/1 Running 0 40s
  zk-2 0/1 Pending 0 0s
  zk-2 0/1 Pending 0 0s
  zk-2 0/1 ContainerCreating 0 0s
  zk-2 0/1 Running 0 19s
  zk-2 1/1 Running 0 40s

Use the command below to get the value you entered during the sanity test, from the zk-2 Pod.

  kubectl exec zk-2 -- zkCli.sh get /hello

Even though you terminated and recreated all of the Pods in the zk StatefulSet, the ensemble still serves the original value.

  WATCHER::
  WatchedEvent state:SyncConnected type:None path:null
  world
  cZxid = 0x100000002
  ctime = Thu Dec 08 15:13:30 UTC 2016
  mZxid = 0x100000002
  mtime = Thu Dec 08 15:13:30 UTC 2016
  pZxid = 0x100000002
  cversion = 0
  dataVersion = 0
  aclVersion = 0
  ephemeralOwner = 0x0
  dataLength = 5
  numChildren = 0

The volumeClaimTemplates field of the zk StatefulSet’s spec specifies a PersistentVolume provisioned for each Pod.

  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.

Use the following command to get the StatefulSet's PersistentVolumeClaims.

  kubectl get pvc -l app=zk

When the StatefulSet recreated its Pods, it remounted the Pods' PersistentVolumes.

  NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
  datadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 10Gi RWO 1h
  datadir-zk-1 Bound pvc-bedd27d2-bcb1-11e6-994f-42010a800002 10Gi RWO 1h
  datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 10Gi RWO 1h

The volumeMounts section of the StatefulSet's container template mounts the PersistentVolumes in the ZooKeeper servers' data directories.

  volumeMounts:
  - name: datadir
    mountPath: /var/lib/zookeeper

When a Pod in the zk StatefulSet is (re)scheduled, it will always have the same PersistentVolume mounted to the ZooKeeper server’s data directory. Even when the Pods are rescheduled, all the writes made to the ZooKeeper servers’ WALs, and all their snapshots, remain durable.

Ensuring consistent configuration

As noted in the Facilitating Leader Election and Achieving Consensus sections, the servers in a ZooKeeper ensemble require consistent configuration to elect a leader and form a quorum. They also require consistent configuration of the Zab protocol in order for the protocol to work correctly over a network. In our example we achieve consistent configuration by embedding the configuration directly into the manifest.

Get the zk StatefulSet.

  kubectl get sts zk -o yaml

  command:
  - sh
  - -c
  - "start-zookeeper \
    --servers=3 \
    --data_dir=/var/lib/zookeeper/data \
    --data_log_dir=/var/lib/zookeeper/data/log \
    --conf_dir=/opt/zookeeper/conf \
    --client_port=2181 \
    --election_port=3888 \
    --server_port=2888 \
    --tick_time=2000 \
    --init_limit=10 \
    --sync_limit=5 \
    --heap=512M \
    --max_client_cnxns=60 \
    --snap_retain_count=3 \
    --purge_interval=12 \
    --max_session_timeout=40000 \
    --min_session_timeout=4000 \
    --log_level=INFO"

The command used to start the ZooKeeper servers passes the configuration as command line parameters. You can also use environment variables to pass configuration to the ensemble.
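
For example, if the container image read its settings from environment variables (the ZK_HEAP_SIZE name below is hypothetical, not a parameter of the image used in this tutorial), you could inject them into the Pod template with kubectl set env, which triggers a rolling update of the StatefulSet.

  kubectl set env sts/zk ZK_HEAP_SIZE=512M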

Configuring logging

One of the files generated by the zkGenConfig.sh script controls ZooKeeper’s logging. ZooKeeper uses Log4j, and, by default, it uses a time and size based rolling file appender for its logging configuration.

Use the command below to get the logging configuration from one of the Pods in the zk StatefulSet.

  kubectl exec zk-0 -- cat /usr/etc/zookeeper/log4j.properties

The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream.

  zookeeper.root.logger=CONSOLE
  zookeeper.console.threshold=INFO
  log4j.rootLogger=${zookeeper.root.logger}
  log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
  log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
  log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
  log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

This is the simplest possible way to safely log inside the container. Because the application writes its logs to standard out, Kubernetes will handle log rotation for you. Kubernetes also implements a sane retention policy that ensures application logs written to standard out and standard error do not exhaust local storage media.

Use kubectl logs to retrieve the last 20 log lines from one of the Pods.

  kubectl logs zk-0 --tail 20

You can view application logs written to standard out or standard error using kubectl logs and from the Kubernetes Dashboard.

  2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740
  2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client)
  2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749
  2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52749
  2016-12-06 19:34:26,156 [myid:1] - INFO [Thread-1137:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52749 (no session established for client)
  2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52750
  2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52750
  2016-12-06 19:34:26,226 [myid:1] - INFO [Thread-1138:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52750 (no session established for client)
  2016-12-06 19:34:36,151 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52760
  2016-12-06 19:34:36,152 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52760
  2016-12-06 19:34:36,152 [myid:1] - INFO [Thread-1139:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52760 (no session established for client)
  2016-12-06 19:34:36,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52761
  2016-12-06 19:34:36,231 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52761
  2016-12-06 19:34:36,231 [myid:1] - INFO [Thread-1140:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52761 (no session established for client)
  2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52767
  2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52767
  2016-12-06 19:34:46,149 [myid:1] - INFO [Thread-1141:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52767 (no session established for client)
  2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52768
  2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52768
  2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)

Kubernetes integrates with many logging solutions. You can choose a logging solution that best fits your cluster and applications. For cluster-level logging and aggregation, consider deploying a sidecar container to rotate and ship your logs.

Configuring a non-privileged user

The best practices to allow an application to run as a privileged user inside of a container are a matter of debate. If your organization requires that applications run as a non-privileged user you can use a SecurityContext to control the user that the entry point runs as.

The zk StatefulSet's Pod template contains a SecurityContext.

  securityContext:
    runAsUser: 1000
    fsGroup: 1000

In the Pods’ containers, UID 1000 corresponds to the zookeeper user and GID 1000 corresponds to the zookeeper group.

Get the ZooKeeper process information from the zk-0 Pod.

  kubectl exec zk-0 -- ps -elf

As the runAsUser field of the securityContext object is set to 1000, instead of running as root, the ZooKeeper process runs as the zookeeper user.

  F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
  4 S zookeep+ 1 0 0 80 0 - 1127 - 20:46 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
  0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg

By default, when a Pod's PersistentVolume is mounted to the ZooKeeper server's data directory, it is accessible only by the root user. This configuration would prevent the ZooKeeper process from writing to its WAL and storing its snapshots.

Use the command below to get the file permissions of the ZooKeeper data directory on the zk-0 Pod.

  kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data

Because the fsGroup field of the securityContext object is set to 1000, the ownership of the Pods’ PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write its data.

  drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data

Managing the ZooKeeper process

The ZooKeeper documentation mentions that “You will want to have a supervisory process that manages each of your ZooKeeper server processes (JVM).” Utilizing a watchdog (supervisory process) to restart failed processes in a distributed system is a common pattern. When deploying an application in Kubernetes, rather than using an external utility as a supervisory process, you should use Kubernetes as the watchdog for your application.

Updating the ensemble

The zk StatefulSet is configured to use the RollingUpdate update strategy.

You can use kubectl patch to update the number of CPUs allocated to the servers.

  kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'

  statefulset.apps/zk patched

Use kubectl rollout status to watch the status of the update.

  kubectl rollout status sts/zk

  waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...
  Waiting for 1 pods to be ready...
  Waiting for 1 pods to be ready...
  waiting for statefulset rolling update to complete 1 pods at revision zk-5db4499664...
  Waiting for 1 pods to be ready...
  Waiting for 1 pods to be ready...
  waiting for statefulset rolling update to complete 2 pods at revision zk-5db4499664...
  Waiting for 1 pods to be ready...
  Waiting for 1 pods to be ready...
  statefulset rolling update complete 3 pods at revision zk-5db4499664...

This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new configuration. This ensures that quorum is maintained during a rolling update.
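
To confirm that each recreated Pod picked up the new CPU request, you can read the value back with a JSONPath query; this is a hedged check that assumes the patch above has finished rolling out. Kubernetes normalizes the 0.3 request, so each line should print 300m.

  for i in 0 1 2; do kubectl get pod zk-$i -o jsonpath='{.spec.containers[0].resources.requests.cpu}'; echo ""; done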

Use the kubectl rollout history command to view a history of previous configurations.

  kubectl rollout history sts/zk

The output is similar to this:

  statefulsets "zk"
  REVISION
  1
  2

Use the kubectl rollout undo command to roll back the modification.

  kubectl rollout undo sts/zk

The output is similar to this:

  statefulset.apps/zk rolled back

Handling process failure

Restart Policies control how Kubernetes handles process failures for the entry point of the container in a Pod. For Pods in a StatefulSet, the only appropriate RestartPolicy is Always, and this is the default value. For stateful applications you should never override the default policy.
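
You can verify the policy on one of the Pods; this is a hedged check that simply reads the field back and should print Always.

  kubectl get pod zk-0 --template '{{.spec.restartPolicy}}'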

Use the following command to examine the process tree for the ZooKeeper server running in the zk-0 Pod.

  kubectl exec zk-0 -- ps -ef

The command used as the container’s entry point has PID 1, and the ZooKeeper process, a child of the entry point, has PID 27.

  UID PID PPID C STIME TTY TIME CMD
  zookeep+ 1 0 0 15:03 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
  zookeep+ 27 1 0 15:03 ? 00:00:03 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg

In another terminal watch the Pods in the zk StatefulSet with the following command.

  kubectl get pod -w -l app=zk

In another terminal, terminate the ZooKeeper process in Pod zk-0 with the following command.

  kubectl exec zk-0 -- pkill java

The termination of the ZooKeeper process caused its parent process to terminate. Because the RestartPolicy of the Pod is Always, the kubelet restarted the parent process.

  NAME READY STATUS RESTARTS AGE
  zk-0 1/1 Running 0 21m
  zk-1 1/1 Running 0 20m
  zk-2 1/1 Running 0 19m
  NAME READY STATUS RESTARTS AGE
  zk-0 0/1 Error 0 29m
  zk-0 0/1 Running 1 29m
  zk-0 1/1 Running 1 29m

If your application uses a script (such as zkServer.sh) to launch the process that implements the application’s business logic, the script must terminate with the child process. This ensures that Kubernetes will restart the application’s container when the process implementing the application’s business logic fails.
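
A minimal, hypothetical sketch of such a launch script: by using exec, the server process replaces the wrapper shell, so the container's entry point exits exactly when the server does. The image in this tutorial achieves the same effect because zkServer.sh start-foreground runs the JVM in the foreground as the last command of its entry point.

  #!/bin/sh
  # Hypothetical wrapper script: generate the configuration, then exec the
  # server in the foreground so the container exits if the server exits,
  # letting Kubernetes restart it.
  zkGenConfig.sh
  exec zkServer.sh start-foreground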

Testing for liveness

Configuring your application to restart failed processes is not enough to keep a distributed system healthy. There are scenarios where a system’s processes can be both alive and unresponsive, or otherwise unhealthy. You should use liveness probes to notify Kubernetes that your application’s processes are unhealthy and it should restart them.

The Pod template for the zk StatefulSet specifies a liveness probe.

  livenessProbe:
    exec:
      command:
      - sh
      - -c
      - "zookeeper-ready 2181"
    initialDelaySeconds: 10
    timeoutSeconds: 5

The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server’s health.

  OK=$(echo ruok | nc 127.0.0.1 $1)
  if [ "$OK" == "imok" ]; then
      exit 0
  else
      exit 1
  fi
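
You can issue the same four letter word by hand to see what the probe sees; this is a hedged check that assumes nc is available in the container image, as the script above implies. A healthy server replies imok.

  kubectl exec zk-0 -- sh -c 'echo ruok | nc 127.0.0.1 2181'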

In one terminal window, use the following command to watch the Pods in the zk StatefulSet.

  kubectl get pod -w -l app=zk

In another window, use the following command to delete the zookeeper-ready script from the file system of Pod zk-0.

  kubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready

When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the container for you, ensuring that unhealthy processes in the ensemble are restarted.

  kubectl get pod -w -l app=zk

  NAME READY STATUS RESTARTS AGE
  zk-0 1/1 Running 0 1h
  zk-1 1/1 Running 0 1h
  zk-2 1/1 Running 0 1h
  NAME READY STATUS RESTARTS AGE
  zk-0 0/1 Running 0 1h
  zk-0 0/1 Running 1 1h
  zk-0 1/1 Running 1 1h

Testing for readiness

Readiness is not the same as liveness. If a process is alive, it is scheduled and healthy. If a process is ready, it is able to process input. Liveness is a necessary, but not sufficient, condition for readiness. There are cases, particularly during initialization and termination, when a process can be alive but not ready.

If you specify a readiness probe, Kubernetes will ensure that your application’s processes will not receive network traffic until their readiness checks pass.

For a ZooKeeper server, liveness implies readiness. Therefore, the readiness probe from the zookeeper.yaml manifest is identical to the liveness probe.

  readinessProbe:
    exec:
      command:
      - sh
      - -c
      - "zookeeper-ready 2181"
    initialDelaySeconds: 10
    timeoutSeconds: 5

Even though the liveness and readiness probes are identical, it is important to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.

Tolerating Node failure

ZooKeeper needs a quorum of servers to successfully commit mutations to data. For a three server ensemble, two servers must be healthy for writes to succeed. In quorum based systems, members are deployed across failure domains to ensure availability. To avoid an outage due to the loss of an individual machine, best practices preclude co-locating multiple instances of the application on the same machine.

By default, Kubernetes may co-locate Pods in a StatefulSet on the same node. For the three server ensemble you created, if two servers are on the same node, and that node fails, the clients of your ZooKeeper service will experience an outage until at least one of the Pods can be rescheduled.

You should always provision additional capacity to allow the processes of critical systems to be rescheduled in the event of node failures. If you do so, then the outage will only last until the Kubernetes scheduler reschedules one of the ZooKeeper servers. However, if you want your service to tolerate node failures with no downtime, you should set podAntiAffinity.

Use the command below to get the nodes for Pods in the zk StatefulSet.

  for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done

All of the Pods in the zk StatefulSet are deployed on different nodes.

  kubernetes-node-cxpk
  kubernetes-node-a5aq
  kubernetes-node-2g2d

This is because the Pods in the zk StatefulSet have a PodAntiAffinity specified.

  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: "app"
            operator: In
            values:
            - zk
        topologyKey: "kubernetes.io/hostname"

The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes scheduler that it should never co-locate two Pods that have the app label set to zk in the domain defined by the topologyKey. The topologyKey kubernetes.io/hostname indicates that the domain is an individual node. Using different rules, labels, and selectors, you can extend this technique to spread your ensemble across physical, network, and power failure domains.
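
For example, to spread the servers across zones instead of individual nodes, you could point the rule at the zone topology key. The patch below is a hedged sketch rather than part of this tutorial's manifest; it assumes your nodes carry the topology.kubernetes.io/zone label, that you have at least three zones, and it triggers a rolling update of the StatefulSet.

  kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/affinity/podAntiAffinity/requiredDuringSchedulingIgnoredDuringExecution/0/topologyKey", "value": "topology.kubernetes.io/zone"}]'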

Surviving maintenance

In this section you will cordon and drain nodes. If you are using this tutorial on a shared cluster, be sure that this will not adversely affect other tenants.

The previous section showed you how to spread your Pods across nodes to survive unplanned node failures, but you also need to plan for temporary node failures that occur due to planned maintenance.

Use this command to get the nodes in your cluster.

  kubectl get nodes

This tutorial assumes a cluster with at least four nodes. If the cluster has more than four, use kubectl cordon to cordon all but four nodes. Constraining the cluster to four schedulable nodes ensures that Kubernetes encounters the affinity and PodDisruptionBudget constraints when scheduling the ZooKeeper Pods in the following maintenance simulation.

  kubectl cordon <node-name>
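
If you would rather not type each node name, the loop below is a hedged sketch that cordons every node after the first four; it assumes the ordering returned by kubectl get nodes is acceptable for your cluster.

  for node in $(kubectl get nodes -o name | tail -n +5); do kubectl cordon "$node"; done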

Use this command to get the zk-pdb PodDisruptionBudget.

  kubectl get pdb zk-pdb

The maxUnavailable field indicates to Kubernetes that at most one Pod from the zk StatefulSet can be unavailable at any time.

  NAME MIN-AVAILABLE MAX-UNAVAILABLE ALLOWED-DISRUPTIONS AGE
  zk-pdb N/A 1 1

In one terminal, use this command to watch the Pods in the zk StatefulSet.

  kubectl get pods -w -l app=zk

In another terminal, use this command to get the nodes that the Pods are currently scheduled on.

  for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done

The output is similar to this:

  kubernetes-node-pb41
  kubernetes-node-ixsl
  kubernetes-node-i4c4

Use kubectl drain to cordon and drain the node on which the zk-0 Pod is scheduled.

  kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data

The output is similar to this:

  node "kubernetes-node-pb41" cordoned
  WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz
  pod "zk-0" deleted
  node "kubernetes-node-pb41" drained

As there are four nodes in your cluster, kubectl drain succeeds and the zk-0 Pod is rescheduled to another node.

  NAME READY STATUS RESTARTS AGE
  zk-0 1/1 Running 2 1h
  zk-1 1/1 Running 0 1h
  zk-2 1/1 Running 0 1h
  NAME READY STATUS RESTARTS AGE
  zk-0 1/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 ContainerCreating 0 0s
  zk-0 0/1 Running 0 51s
  zk-0 1/1 Running 0 1m

Keep watching the StatefulSet's Pods in the first terminal and drain the node on which zk-1 is scheduled.

  kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data

The output is similar to this:

  1. "kubernetes-node-ixsl" cordoned
  2. WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
  3. pod "zk-1" deleted
  4. node "kubernetes-node-ixsl" drained

The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.

  kubectl get pods -w -l app=zk

The output is similar to this:

  NAME READY STATUS RESTARTS AGE
  zk-0 1/1 Running 2 1h
  zk-1 1/1 Running 0 1h
  zk-2 1/1 Running 0 1h
  NAME READY STATUS RESTARTS AGE
  zk-0 1/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 ContainerCreating 0 0s
  zk-0 0/1 Running 0 51s
  zk-0 1/1 Running 0 1m
  zk-1 1/1 Terminating 0 2h
  zk-1 0/1 Terminating 0 2h
  zk-1 0/1 Terminating 0 2h
  zk-1 0/1 Terminating 0 2h
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 Pending 0 0s

Continue to watch the Pods of the StatefulSet, and drain the node on which zk-2 is scheduled.

  kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data

The output is similar to this:

  1. node "kubernetes-node-i4c4" cordoned
  2. WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
  3. WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4
  4. There are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget.
  5. pod/zk-2

Use CTRL-C to terminate kubectl.

You cannot drain the third node because evicting zk-2 would violate the zk-pdb PodDisruptionBudget. However, the node will remain cordoned.
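
You can confirm that the budget is exhausted by reading the PodDisruptionBudget again; with zk-1 Pending, the ALLOWED DISRUPTIONS column reports 0, so the eviction of zk-2 is refused.

  kubectl get pdb zk-pdb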

Use zkCli.sh to retrieve the value you entered during the sanity test from zk-0.

  kubectl exec zk-0 -- zkCli.sh get /hello

The service is still available because its PodDisruptionBudget is respected.

  WatchedEvent state:SyncConnected type:None path:null
  world
  cZxid = 0x200000002
  ctime = Wed Dec 07 00:08:59 UTC 2016
  mZxid = 0x200000002
  mtime = Wed Dec 07 00:08:59 UTC 2016
  pZxid = 0x200000002
  cversion = 0
  dataVersion = 0
  aclVersion = 0
  ephemeralOwner = 0x0
  dataLength = 5
  numChildren = 0

Use kubectl uncordon to uncordon the first node.

  kubectl uncordon kubernetes-node-pb41

The output is similar to this:

  1. node "kubernetes-node-pb41" uncordoned

zk-1 is rescheduled on this node. Wait until zk-1 is Running and Ready.

  kubectl get pods -w -l app=zk

The output is similar to this:

  NAME READY STATUS RESTARTS AGE
  zk-0 1/1 Running 2 1h
  zk-1 1/1 Running 0 1h
  zk-2 1/1 Running 0 1h
  NAME READY STATUS RESTARTS AGE
  zk-0 1/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Terminating 2 2h
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 Pending 0 0s
  zk-0 0/1 ContainerCreating 0 0s
  zk-0 0/1 Running 0 51s
  zk-0 1/1 Running 0 1m
  zk-1 1/1 Terminating 0 2h
  zk-1 0/1 Terminating 0 2h
  zk-1 0/1 Terminating 0 2h
  zk-1 0/1 Terminating 0 2h
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 Pending 0 0s
  zk-1 0/1 Pending 0 12m
  zk-1 0/1 ContainerCreating 0 12m
  zk-1 0/1 Running 0 13m
  zk-1 1/1 Running 0 13m

Attempt to drain the node on which zk-2 is scheduled.

  kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data

The output is similar to this:

  1. node "kubernetes-node-i4c4" already cordoned
  2. WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
  3. pod "heapster-v1.2.0-2604621511-wht1r" deleted
  4. pod "zk-2" deleted
  5. node "kubernetes-node-i4c4" drained

This time kubectl drain succeeds.

Uncordon the second node to allow zk-2 to be rescheduled.

  kubectl uncordon kubernetes-node-ixsl

The output is similar to this:

  1. node "kubernetes-node-ixsl" uncordoned

You can use kubectl drain in conjunction with PodDisruptionBudgets to ensure that your services remain available during maintenance. If drain is used to cordon nodes and evict pods prior to taking the node offline for maintenance, services that express a disruption budget will have that budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled.

Cleaning up

  • Use kubectl uncordon to uncordon all the nodes in your cluster, as sketched after this list.
  • You must delete the persistent storage media for the PersistentVolumes used in this tutorial. Follow the necessary steps, based on your environment, storage configuration, and provisioning method, to ensure that all storage is reclaimed.
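
A hedged sketch for the first cleanup step; it simply uncordons every node, including any that were never cordoned, which is harmless.

  for node in $(kubectl get nodes -o name); do kubectl uncordon "$node"; done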