Installing the MetalLB Operator
As a cluster administrator, you can add the MetalLB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster.
MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator.
Installing the MetalLB Operator from the OperatorHub using the web console
As a cluster administrator, you can install the MetalLB Operator by using the OKD web console.
Prerequisites
- Log in as a user with cluster-admin privileges.
Procedure
In the OKD web console, navigate to Operators → OperatorHub.
Type a keyword into the Filter by keyword box or scroll to find the Operator you want. For example, type metallb to find the MetalLB Operator.
You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
On the Install Operator page, accept the defaults and click Install.
Verification
To confirm that the installation is successful:
Navigate to the Operators → Installed Operators page.
Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded.
If the Operator is not installed successfully, check the status of the Operator and review the logs:
Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures.
Navigate to the Workloads → Pods page and check the logs in any pods in the openshift-operators project that are reporting issues.
Installing from OperatorHub using the CLI
Instead of using the OKD web console, you can install an Operator from OperatorHub by using the CLI. You can use the OpenShift CLI (oc) to install the MetalLB Operator.
It is recommended that when using the CLI you install the Operator in the metallb-system namespace.
Prerequisites
A cluster installed on bare-metal hardware.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Procedure
Create a namespace for the MetalLB Operator by entering the following command:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
EOF
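Optional: To confirm that the namespace exists, you can run the following command:
$ oc get namespace metallb-system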
Create an Operator group custom resource (CR) in the namespace:
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
EOF
Confirm the Operator group is installed in the namespace:
$ oc get operatorgroup -n metallb-system
Example output
NAME AGE
metallb-operator 14m
Create a Subscription CR:
Define the Subscription CR and save the YAML file, for example, metallb-sub.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub
  namespace: metallb-system
spec:
  channel: stable
  name: metallb-operator
  source: redhat-operators (1)
  sourceNamespace: openshift-marketplace
1 You must specify the redhat-operators value.
To create the Subscription CR, run the following command:
$ oc create -f metallb-sub.yaml
Optional: To ensure BGP and BFD metrics appear in Prometheus, you can label the namespace as shown in the following command:
$ oc label ns metallb-system "openshift.io/cluster-monitoring=true"
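Optional: To confirm that the label is applied to the namespace, you can run the following command:
$ oc get namespace metallb-system --show-labels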
Verification
The verification steps assume the MetalLB Operator is installed in the metallb-system namespace.
Confirm the install plan is in the namespace:
$ oc get installplan -n metallb-system
Example output
NAME CSV APPROVAL APPROVED
install-wzg94 metallb-operator.4.0-nnnnnnnnnnnn Automatic true
Installation of the Operator might take a few seconds.
To verify that the Operator is installed, enter the following command:
$ oc get clusterserviceversion -n metallb-system \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name Phase
metallb-operator.4.0-nnnnnnnnnnnn Succeeded
Starting MetalLB on your cluster
After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster.
Prerequisites
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the MetalLB Operator.
Procedure
This procedure assumes the MetalLB Operator is installed in the metallb-system namespace. If you installed the Operator by using the web console, substitute openshift-operators for the namespace.
Create a single instance of a MetalLB custom resource:
$ cat << EOF | oc apply -f -
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
EOF
Verification
Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running.
Verify that the deployment for the controller is running:
$ oc get deployment -n metallb-system controller
Example output
NAME READY UP-TO-DATE AVAILABLE AGE
controller 1/1 1 1 11m
Verify that the daemon set for the speaker is running:
$ oc get daemonset -n metallb-system speaker
Example output
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
speaker 6 6 6 6 6 kubernetes.io/os=linux 18m
The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster.
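If you want to check which node each speaker pod runs on, you can run a command similar to the following. This sketch assumes that the speaker pods carry the component=speaker label that upstream MetalLB applies to its speaker pods; check the pod labels in your cluster if the command returns no results:
$ oc get pods -n metallb-system -l component=speaker -o wide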
Deployment specifications for MetalLB
When you start an instance of MetalLB using the MetalLB custom resource, you can configure deployment specifications in the MetalLB custom resource to manage how the controller or speaker pods deploy and run in your cluster. Use these deployment specifications to manage the following tasks:
Select nodes for MetalLB pod deployment.
Manage scheduling by using pod priority and pod affinity.
Assign CPU limits for MetalLB pods.
Assign a container RuntimeClass for MetalLB pods.
Assign metadata for MetalLB pods.
Limit speaker pods to specific nodes
By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. Only the nodes with a speaker pod can advertise a load balancer IP address. You can configure the MetalLB custom resource with a node selector to specify which nodes run the speaker pods.
The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. Only the nodes with a running speaker pod are advertised as destinations of the load balancer IP address.
If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes.
Example configuration to limit speaker pods to worker nodes
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector: (1)
    node-role.kubernetes.io/worker: ""
  speakerTolerations: (2)
  - key: "Example"
    operator: "Exists"
    effect: "NoExecute"
1 The example configuration specifies to assign the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector.
2 In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the key value and effect value by using the operator.
After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command. Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker=.
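For example, you can run the following commands to compare the daemon set status with the nodes that match the worker label:
$ oc get daemonset -n metallb-system speaker
$ oc get nodes -l node-role.kubernetes.io/worker=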
You can optionally use affinity rules to control which nodes the speaker pods are, or are not, scheduled on. You can also limit these pods by applying a list of tolerations. For more information about affinity rules, taints, and tolerations, see the additional resources.
Configuring a container runtime class in a MetalLB deployment
You can optionally assign a container runtime class to controller and speaker pods by configuring the MetalLB custom resource. For example, for Windows workloads, you can assign a Windows runtime class to the pod, which uses this runtime class for all containers in the pod.
Prerequisites
You are logged in as a user with cluster-admin privileges.
You have installed the MetalLB Operator.
Procedure
Create a RuntimeClass custom resource, such as myRuntimeClass.yaml, to define your runtime class:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass
handler: myconfiguration
Apply the RuntimeClass custom resource configuration:
$ oc apply -f myRuntimeClass.yaml
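Optional: To confirm that the runtime class exists, you can run the following command:
$ oc get runtimeclass myclass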
Create a MetalLB custom resource, such as MetalLBRuntime.yaml, to specify the runtimeClassName value:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug
  controllerConfig:
    runtimeClassName: myclass
    annotations: (1)
      controller: demo
  speakerConfig:
    runtimeClassName: myclass
    annotations: (1)
      speaker: demo
1 This example uses annotations to add metadata such as build release information or GitHub pull request information. You can populate annotations with characters that are not permitted in labels. However, you cannot use annotations to identify or select objects.
Apply the MetalLB custom resource configuration:
$ oc apply -f MetalLBRuntime.yaml
Verification
To view the container runtime for a pod, run the following command:
$ oc get pod -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName
Configuring pod priority and pod affinity in a MetalLB deployment
You can optionally assign pod priority and pod affinity rules to controller and speaker pods by configuring the MetalLB custom resource. The pod priority indicates the relative importance of a pod on a node and schedules the pod based on this priority. Set a high priority on your controller or speaker pod to ensure scheduling priority over other pods on the node.
Pod affinity manages relationships among pods. Assign pod affinity to the controller or speaker pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can use pod affinity rules to ensure that certain pods are located on the same node or nodes, which can help improve network communication and reduce latency between those components.
Prerequisites
You are logged in as a user with cluster-admin privileges.
You have installed the MetalLB Operator.
You have started the MetalLB Operator on your cluster.
Procedure
Create a PriorityClass custom resource, such as myPriorityClass.yaml, to configure the priority level. This example defines a PriorityClass named high-priority with a value of 1000000. Pods that are assigned this priority class are considered higher priority during scheduling compared to pods with lower priority classes:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
Apply the PriorityClass custom resource configuration:
$ oc apply -f myPriorityClass.yaml
Create a MetalLB custom resource, such as MetalLBPodConfig.yaml, to specify the priorityClassName and podAffinity values:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug
  controllerConfig:
    priorityClassName: high-priority (1)
    affinity:
      podAffinity: (2)
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: metallb
          topologyKey: kubernetes.io/hostname
  speakerConfig:
    priorityClassName: high-priority
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: metallb
          topologyKey: kubernetes.io/hostname
1 Specifies the priority class for the MetalLB controller pods. In this case, it is set to high-priority.
2 Specifies that you are configuring pod affinity rules. These rules dictate how pods are scheduled in relation to other pods or nodes. This configuration instructs the scheduler to schedule pods that have the label app: metallb onto nodes that share the same hostname. This helps to co-locate MetalLB-related pods on the same nodes, potentially optimizing network communication, latency, and resource usage between these pods.
Apply the MetalLB custom resource configuration:
$ oc apply -f MetalLBPodConfig.yaml
Verification
To view the priority class that you assigned to pods in the metallb-system namespace, run the following command:
$ oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName
Example output
NAME PRIORITY
controller-584f5c8cd8-5zbvg high-priority
metallb-operator-controller-manager-9c8d9985-szkqg <none>
metallb-operator-webhook-server-c895594d4-shjgx <none>
speaker-dddf7 high-priority
To verify that the scheduler placed pods according to pod affinity rules, view the metadata for the pod’s node or nodes by running the following command:
$ oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system
Configuring pod CPU limits in a MetalLB deployment
You can optionally assign pod CPU limits to controller and speaker pods by configuring the MetalLB custom resource. Defining CPU limits for the controller or speaker pods helps you to manage compute resources on the node. This ensures all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping.
Prerequisites
You are logged in as a user with cluster-admin privileges.
You have installed the MetalLB Operator.
Procedure
Create a MetalLB custom resource file, such as CPULimits.yaml, to specify the cpu value for the controller and speaker pods:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug
  controllerConfig:
    resources:
      limits:
        cpu: "200m"
  speakerConfig:
    resources:
      limits:
        cpu: "300m"
Apply the MetalLB custom resource configuration:
$ oc apply -f CPULimits.yaml
Verification
To view compute resources for a pod, run the following command, replacing <pod_name> with your target pod:
$ oc describe pod <pod_name>
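Alternatively, to print only the configured resource limits and requests, you can run a command similar to the following. This sketch assumes the MetalLB pods run in the metallb-system namespace:
$ oc get pod <pod_name> -n metallb-system -o jsonpath='{.spec.containers[*].resources}'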