Install calico/node
calico/node runs three daemons:
- Felix, the Calico per-node daemon
- BIRD, a daemon that speaks the BGP protocol to distribute routing information to other nodes
- confd, a daemon that watches the Calico datastore for config changes and updates BIRD’s config files
In this lab we configure and install calico/node as a daemon set.
Provision Certificates
Create the key calico/node will use to authenticate with Typha, and the certificate signing request (CSR):
openssl req -newkey rsa:4096 \
        -keyout calico-node.key \
        -nodes \
        -out calico-node.csr \
        -subj "/CN=calico-node"
The certificate presents the Common Name (CN) as calico-node, which is what we configured Typha to accept in the last lab.
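If you want to double-check the CN before signing, you can inspect the CSR's subject (an optional check):
openssl req -in calico-node.csr -noout -subject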
Sign the Felix certificate with the CA we created earlier
openssl x509 -req -in calico-node.csr \
        -CA typhaca.crt \
        -CAkey typhaca.key \
        -CAcreateserial \
        -out calico-node.crt \
        -days 365
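Optionally, verify that the signed certificate chains back to the CA:
openssl verify -CAfile typhaca.crt calico-node.crt
You should see output like calico-node.crt: OK.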
Store the key and certificate in a Secret that calico/node will access:
kubectl create secret generic -n kube-system calico-node-certs --from-file=calico-node.key --from-file=calico-node.crt
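You can confirm the Secret holds both files (optional):
kubectl describe secret -n kube-system calico-node-certs
The Data section should list calico-node.crt and calico-node.key with non-zero sizes.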
Validate the key and certificate that calico/node will use to access Typha over TLS:
curl https://calico-typha:5473 -v --cacert typhaca.crt --resolve calico-typha:5473:$TYPHA_CLUSTERIP --cert calico-node.crt --key calico-node.key
Result
* Added calico-typha:5473:10.103.120.116 to DNS cache
* Hostname calico-typha was found in DNS cache
* Trying 10.103.120.116:5473...
* Connected to calico-typha (10.103.120.116) port 5473 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: typhaca.crt
* CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, CERT verify (15):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=calico-typha
* start date: Jan 27 19:35:44 2024 GMT
* expire date: Jan 26 19:35:44 2025 GMT
* common name: calico-typha (matched)
* issuer: CN=Calico Typha CA
* SSL certificate verify ok.
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/1.1
> Host: calico-typha:5473
> User-Agent: curl/7.81.0
> Accept: */*
>
* TLSv1.2 (IN), TLS header, Unknown (21):
* TLSv1.2 (IN), TLS alert, close notify (256):
* Empty reply from server
* Closing connection 0
* TLSv1.2 (OUT), TLS header, Unknown (21):
* TLSv1.2 (OUT), TLS alert, close notify (256):
curl: (52) Empty reply from server
This demonstrates that Typha is presenting its TLS certificate and accepting our connection when we present the calico/node certificate and key.
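As an optional negative test, repeat the request without the client certificate and key. Since Typha requires client certificates, the handshake should fail rather than complete (the exact error message varies with your curl and TLS library versions):
curl https://calico-typha:5473 -v --cacert typhaca.crt --resolve calico-typha:5473:$TYPHA_CLUSTERIP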
Provision RBAC
Create the ServiceAccount that calico/node will run as:
kubectl create serviceaccount -n kube-system calico-node
Provision a cluster role with permissions to read and modify Calico datastore objects
kubectl apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  # EndpointSlices are used for Service-based network policy rule
  # enforcement.
  - apiGroups: ["discovery.k8s.io"]
    resources:
      - endpointslices
    verbs:
      - watch
      - list
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Used for creating service account tokens to be used by the CNI plugin
  - apiGroups: [""]
    resources:
      - serviceaccounts/token
    resourceNames:
      - calico-node
    verbs:
      - create
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
EOF
Bind the cluster role to the calico-node ServiceAccount:
kubectl create clusterrolebinding calico-node --clusterrole=calico-node --serviceaccount=kube-system:calico-node
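You can spot-check the binding with kubectl auth can-i, impersonating the ServiceAccount (optional):
kubectl auth can-i watch ippools.crd.projectcalico.org --as=system:serviceaccount:kube-system:calico-node
The command should print yes.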
Install daemon set
calico/node runs as a daemon set so that it is installed on every node in the cluster.
Create the daemon set
kubectl apply -f - <<EOF
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.20.0
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            - name: FELIX_TYPHAK8SSERVICENAME
              value: calico-typha
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              value: bird
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Disable file logging so kubectl logs works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
            # Location of the CA bundle Felix uses to authenticate Typha; volume mount
            - name: FELIX_TYPHACAFILE
              value: /calico-typha-ca/typhaca.crt
            # Common name on the Typha certificate; used to verify we are talking to an authentic typha
            - name: FELIX_TYPHACN
              value: calico-typha
            # Location of the client certificate for connecting to Typha; volume mount
            - name: FELIX_TYPHACERTFILE
              value: /calico-node-certs/calico-node.crt
            # Location of the client certificate key for connecting to Typha; volume mount
            - name: FELIX_TYPHAKEYFILE
              value: /calico-node-certs/calico-node.key
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/calico-node
                  - -shutdown
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/calico-node
                - -bird-ready
                - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /var/run/nodeagent
              name: policysync
            - mountPath: "/calico-typha-ca"
              name: calico-typha-ca
              readOnly: true
            - mountPath: /calico-node-certs
              name: calico-node-certs
              readOnly: true
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        - name: calico-typha-ca
          configMap:
            name: calico-typha-ca
        - name: calico-node-certs
          secret:
            secretName: calico-node-certs
EOF
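You can watch the rollout progress across your nodes with:
kubectl rollout status daemonset/calico-node -n kube-system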
Verify that calico/node is running on each node in your cluster and that each pod reaches Running within a few minutes.
kubectl get pod -l k8s-app=calico-node -n kube-system
Result
NAME READY STATUS RESTARTS AGE
calico-node-99ksc 1/1 Running 0 9m51s
calico-node-cbgxr 1/1 Running 0 9m21s
calico-node-j456w 1/1 Running 0 9m42s
calico-node-rflbk 1/1 Running 0 9m32s
calico-node-xlpkh 1/1 Running 0 9m12s
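As a final spot check, you can run the same readiness command the daemon set uses for its probe inside any one of the pods; replace the <pod-name> placeholder with one of the pod names from the output above:
kubectl exec -n kube-system <pod-name> -- /bin/calico-node -bird-ready -felix-ready
If both Felix and BIRD report ready, the command exits successfully (status 0).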