Rancher Agents

There are two different agent resources deployed on Rancher-managed clusters: the cattle-cluster-agent and the cattle-node-agent.

For a conceptual overview of how the Rancher server provisions clusters and communicates with them, refer to the architecture page.

cattle-cluster-agent

The cattle-cluster-agent is used to connect to the Kubernetes API of Rancher Launched Kubernetes clusters. It is deployed using a Deployment resource.
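To verify the agent on a managed cluster, you can inspect its Deployment directly; it runs in the cattle-system namespace (a minimal check, assuming kubectl access to the downstream cluster):

  # Show the cattle-cluster-agent Deployment and its readiness
  kubectl -n cattle-system get deployment cattle-cluster-agent -o wide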

cattle-node-agent

The cattle-node-agent is used to interact with nodes in a Rancher Launched Kubernetes cluster when performing cluster operations, such as upgrading the Kubernetes version and creating or restoring etcd snapshots. The cattle-node-agent is deployed using a DaemonSet resource to make sure it runs on every node. It is also used as a fallback option to connect to the Kubernetes API of Rancher Launched Kubernetes clusters when the cattle-cluster-agent is unavailable.
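You can likewise confirm that the DaemonSet is running a pod on every node (again assuming kubectl access to the downstream cluster):

  # DESIRED and READY counts should match the number of nodes
  kubectl -n cattle-system get daemonset cattle-node-agent -o wide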

Requests

The cattle-cluster-agent pod does not define default CPU and memory request values. As a baseline, we recommend setting the CPU request to 50m and the memory request to 100Mi. However, it is important that you assess your use case and allocate resources appropriate to your cluster's needs.
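Before tuning these values, it can be useful to see what the agent pod currently requests. A hedged example, assuming the agent container is the first container in the pod spec:

  # Print the resources block of the first container in the agent Deployment
  kubectl -n cattle-system get deployment cattle-cluster-agent \
    -o jsonpath='{.spec.template.spec.containers[0].resources}'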

To configure request values through the UI:

RKE:

  1. When you create or edit an existing cluster, go to the Cluster Options section.
  2. Expand the Cluster Configuration subsection.
  3. Configure your request values using the CPU Requests and Memory Requests fields as needed.

RKE2/K3s:

  1. When you create or edit an existing cluster, go to the Cluster Configuration section.
  2. Select the Cluster Agent subsection.
  3. Configure your request values using the CPU Reservation and Memory Reservation fields as needed.

If you prefer to configure via YAML, add the following snippet to your configuration file:

RKE:

  cluster_agent_deployment_customization:
    override_resource_requirements:
      requests:
        cpu: 50m
        memory: 100Mi

RKE2/K3s:

  spec:
    clusterAgentDeploymentCustomization:
      overrideResourceRequirements:
        requests:
          cpu: 50m
          memory: 100Mi
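For RKE2/K3s, the spec block above belongs in the cluster's provisioning.cattle.io object. A minimal sketch showing it in context, assuming a hypothetical cluster named my-cluster in the fleet-default namespace:

  apiVersion: provisioning.cattle.io/v1
  kind: Cluster
  metadata:
    name: my-cluster        # assumption: replace with your cluster name
    namespace: fleet-default
  spec:
    clusterAgentDeploymentCustomization:
      overrideResourceRequirements:
        requests:
          cpu: 50m
          memory: 100Mi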

Scheduling rules

The cattle-cluster-agent uses either a fixed set of tolerations, or dynamically added tolerations based on the taints applied to the control plane nodes. This structure allows taint-based evictions to work properly for the cattle-cluster-agent.

If control plane nodes are present in the cluster, the default tolerations are replaced with tolerations matching the taints on the control plane nodes. The default set of tolerations is described below.

| Component | nodeAffinity nodeSelectorTerms | nodeSelector | Tolerations |
| --- | --- | --- | --- |
| cattle-cluster-agent | beta.kubernetes.io/os:NotIn:windows | none | effect:NoSchedule, key:node-role.kubernetes.io/controlplane, value:true; effect:NoSchedule, key:node-role.kubernetes.io/control-plane, operator:Exists; effect:NoSchedule, key:node-role.kubernetes.io/master, operator:Exists. Note: these are the default tolerations, and will be replaced by tolerations matching taints applied to controlplane nodes. |
| cattle-node-agent | beta.kubernetes.io/os:NotIn:windows | none | operator:Exists |
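To see which tolerations are actually in effect on a given cluster (the defaults, or the taint-matched replacements), you can query the Deployment, for example:

  # Print the tolerations applied to the cattle-cluster-agent pod template
  kubectl -n cattle-system get deployment cattle-cluster-agent \
    -o jsonpath='{.spec.template.spec.tolerations}'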

The cattle-cluster-agent Deployment has preferred scheduling rules using preferredDuringSchedulingIgnoredDuringExecution, preferring to be scheduled on controlplane nodes. When no controlplane nodes are visible in the cluster (as is usually the case with Clusters from Hosted Kubernetes Providers), you can add the label cattle.io/cluster-agent=true to a node to prefer scheduling the cattle-cluster-agent pod on that node, as shown below.
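For example, to mark a node as preferred for the agent (node-1 is a placeholder for your node's name):

  kubectl label node node-1 cattle.io/cluster-agent=true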

See Kubernetes: Assigning Pods to Nodes for more information about scheduling rules.

The preferredDuringSchedulingIgnoredDuringExecution configuration is shown in the table below:

| Weight | Expression |
| --- | --- |
| 100 | node-role.kubernetes.io/controlplane:In:"true" |
| 100 | node-role.kubernetes.io/control-plane:In:"true" |
| 100 | node-role.kubernetes.io/master:In:"true" |
| 1 | cattle.io/cluster-agent:In:"true" |
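Expressed as pod spec YAML, these rules correspond to a nodeAffinity block like the following (a sketch reconstructed from the table above, not copied from the agent's manifest):

  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: node-role.kubernetes.io/controlplane
                operator: In
                values: ["true"]
        - weight: 100
          preference:
            matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: In
                values: ["true"]
        - weight: 100
          preference:
            matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values: ["true"]
        - weight: 1
          preference:
            matchExpressions:
              - key: cattle.io/cluster-agent
                operator: In
                values: ["true"]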