Install calico/node

calico/node runs three daemons:

  • Felix, the Calico per-node daemon
  • BIRD, a daemon that speaks the BGP protocol to distribute routing information to other nodes
  • confd, a daemon that watches the Calico datastore for config changes and updates BIRD’s config files

In this lab we configure and install calico/node as a daemon set.

Provision Certificates

Create the key calico/node will use to authenticate with Typha and the certificate signing request (CSR)

openssl req -newkey rsa:4096 \
            -keyout calico-node.key \
            -nodes \
            -out calico-node.csr \
            -subj "/CN=calico-node"

The certificate presents the Common Name (CN) as calico-node, which is what we configured Typha to accept in the last lab.

Sign the Felix certificate with the CA we created earlier

openssl x509 -req -in calico-node.csr \
             -CA typhaca.crt \
             -CAkey typhaca.key \
             -CAcreateserial \
             -out calico-node.crt \
             -days 365
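
Before moving on, it is worth confirming that the signed certificate chains to the CA and carries the expected Common Name. The sketch below replays the same request/sign/verify flow end to end with throwaway file names (ca.key, node.csr, and so on are illustrative, not the lab's files), so you can run it anywhere; against your real files, the last two commands alone suffice.

```shell
# Self-contained sketch of the CSR/sign/verify flow above, using throwaway
# names in a temp directory. In the lab, substitute typhaca.crt/typhaca.key
# and calico-node.csr/calico-node.crt.
set -e
dir=$(mktemp -d)
cd "$dir"

# Throwaway self-signed CA (stands in for the Typha CA created earlier)
openssl req -x509 -newkey rsa:2048 -keyout ca.key -nodes \
        -out ca.crt -days 365 -subj "/CN=Demo CA"

# Client key and CSR, as in the lab
openssl req -newkey rsa:2048 -keyout node.key -nodes \
        -out node.csr -subj "/CN=calico-node"

# Sign the CSR with the CA
openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out node.crt -days 365

# Confirm the certificate chains to the CA, and inspect its subject CN
openssl verify -CAfile ca.crt node.crt   # prints: node.crt: OK
openssl x509 -in node.crt -noout -subject
```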

Store the key and certificate in a Secret that calico/node will access

kubectl create secret generic -n kube-system calico-node-certs --from-file=calico-node.key --from-file=calico-node.crt
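
If you want to confirm the Secret holds both files before anything mounts it, you can inspect its data keys (this queries your live cluster, so sizes will vary):

```shell
# Show the Secret's data keys; expect calico-node.crt and calico-node.key
# listed under "Data", each with a non-zero byte count.
kubectl describe secret calico-node-certs -n kube-system
```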

Validate the key and certificate that calico/node will use to access Typha using TLS:

curl https://calico-typha:5473 -v --cacert typhaca.crt --resolve calico-typha:5473:$TYPHA_CLUSTERIP --cert calico-node.crt --key calico-node.key

Result

* Added calico-typha:5473:10.103.120.116 to DNS cache
* Hostname calico-typha was found in DNS cache
*   Trying 10.103.120.116:5473...
* Connected to calico-typha (10.103.120.116) port 5473 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: typhaca.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, CERT verify (15):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=calico-typha
*  start date: Jan 27 19:35:44 2024 GMT
*  expire date: Jan 26 19:35:44 2025 GMT
*  common name: calico-typha (matched)
*  issuer: CN=Calico Typha CA
*  SSL certificate verify ok.
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/1.1
> Host: calico-typha:5473
> User-Agent: curl/7.81.0
> Accept: */*
>
* TLSv1.2 (IN), TLS header, Unknown (21):
* TLSv1.2 (IN), TLS alert, close notify (256):
* Empty reply from server
* Closing connection 0
* TLSv1.2 (OUT), TLS header, Unknown (21):
* TLSv1.2 (OUT), TLS alert, close notify (256):
curl: (52) Empty reply from server

This demonstrates that Typha presented its TLS certificate and accepted our connection when we presented the calico/node certificate and key. (The "Empty reply from server" is expected: Typha speaks its own protocol on this port, not HTTP.)

Provision RBAC

Create the ServiceAccount that calico/node will run as

kubectl create serviceaccount -n kube-system calico-node

Provision a cluster role with permissions to read and modify Calico datastore objects

kubectl apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  # EndpointSlices are used for Service-based network policy rule
  # enforcement.
  - apiGroups: ["discovery.k8s.io"]
    resources:
      - endpointslices
    verbs:
      - watch
      - list
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Used for creating service account tokens to be used by the CNI plugin
  - apiGroups: [""]
    resources:
      - serviceaccounts/token
    resourceNames:
      - calico-node
    verbs:
      - create
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
EOF

Bind the cluster role to the calico-node ServiceAccount

kubectl create clusterrolebinding calico-node --clusterrole=calico-node --serviceaccount=kube-system:calico-node
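
You can spot-check that the binding took effect by impersonating the service account with kubectl auth can-i. Assuming no other bindings grant this service account extra permissions, the first command should print yes and the second no:

```shell
# A permission the calico-node ClusterRole grants: reading pods
kubectl auth can-i get pods \
    --as=system:serviceaccount:kube-system:calico-node

# A permission it does not grant: deleting nodes
kubectl auth can-i delete nodes \
    --as=system:serviceaccount:kube-system:calico-node
```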

Install daemon set

calico/node runs as a daemon set so that it is installed on every node in the cluster.

Create the daemon set

kubectl apply -f - <<EOF
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.20.0
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            - name: FELIX_TYPHAK8SSERVICENAME
              value: calico-typha
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              value: bird
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Disable file logging so kubectl logs works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
            # Location of the CA bundle Felix uses to authenticate Typha; volume mount
            - name: FELIX_TYPHACAFILE
              value: /calico-typha-ca/typhaca.crt
            # Common name on the Typha certificate; used to verify we are talking to an authentic typha
            - name: FELIX_TYPHACN
              value: calico-typha
            # Location of the client certificate for connecting to Typha; volume mount
            - name: FELIX_TYPHACERTFILE
              value: /calico-node-certs/calico-node.crt
            # Location of the client certificate key for connecting to Typha; volume mount
            - name: FELIX_TYPHAKEYFILE
              value: /calico-node-certs/calico-node.key
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/calico-node
                  - -shutdown
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/calico-node
                - -bird-ready
                - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /var/run/nodeagent
              name: policysync
            - mountPath: "/calico-typha-ca"
              name: calico-typha-ca
              readOnly: true
            - mountPath: /calico-node-certs
              name: calico-node-certs
              readOnly: true
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        - name: calico-typha-ca
          configMap:
            name: calico-typha-ca
        - name: calico-node-certs
          secret:
            secretName: calico-node-certs
EOF

Verify that a calico/node pod is running on each node in your cluster; each should reach the Running state within a few minutes.

kubectl get pod -l k8s-app=calico-node -n kube-system

Result

NAME                READY   STATUS    RESTARTS   AGE
calico-node-99ksc   1/1     Running   0          9m51s
calico-node-cbgxr   1/1     Running   0          9m21s
calico-node-j456w   1/1     Running   0          9m42s
calico-node-rflbk   1/1     Running   0          9m32s
calico-node-xlpkh   1/1     Running   0          9m12s
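
If any pod is slow to become Ready, two follow-up commands are useful: waiting on the daemon set rollout, and checking recent pod logs, where misconfigured Typha TLS settings (wrong CA, CN, or certificate paths) typically show up as connection errors:

```shell
# Block until every node's calico-node pod is updated and available
kubectl rollout status ds/calico-node -n kube-system

# Tail recent logs from all calico-node pods to diagnose a stuck pod
kubectl logs -n kube-system -l k8s-app=calico-node --tail=20
```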

Next

Configure BGP peering