Installing the MetalLB Operator
As a cluster administrator, you can add the MetalLB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster.
The installation procedures use the metallb-system namespace. You can install the Operator and configure custom resources in a different namespace. The Operator starts MetalLB in the same namespace that the Operator is installed in.
MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator.
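If you are not sure whether IP failover is configured, one way to check is to look for an existing IP failover deployment before you proceed. This is a sketch that assumes the deployment name contains ipfailover, as in the examples in the IP failover documentation:
$ oc get deployments --all-namespaces | grep ipfailover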
Installing from OperatorHub using the web console
You can install and subscribe to an Operator from OperatorHub using the OKD web console.
Procedure
Navigate in the web console to the Operators → OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type metallb to find the MetalLB Operator. You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
Read the information about the Operator and click Install.
On the Install Operator page:
Select an Update Channel (if more than one is available).
Select Automatic or Manual approval strategy, as described earlier.
Click Install to make the Operator available to the selected namespaces on this OKD cluster.
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.
For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not:
- Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
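You can also perform this verification from the command line. The following commands are a sketch that assumes the All namespaces… installation mode; substitute your own namespace if you selected A specific namespace…, and replace <pod-name> with a pod name from your output:
$ oc get csv -n openshift-operators
$ oc get pods -n openshift-operators
$ oc logs -n openshift-operators <pod-name>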
Installing from OperatorHub using the CLI
Instead of using the OKD web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Procedure
Confirm that the MetalLB Operator is available:
$ oc get packagemanifests -n openshift-marketplace metallb-operator
Example output
NAME               CATALOG             AGE
metallb-operator   Red Hat Operators   9h
Create the metallb-system namespace:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
EOF
Optional: To ensure that BGP and BFD metrics appear in Prometheus, label the namespace as shown in the following command:
$ oc label ns metallb-system "openshift.io/cluster-monitoring=true"
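You can confirm that the label is applied by displaying the namespace labels:
$ oc get ns metallb-system --show-labels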
Create an Operator group custom resource in the namespace:
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  targetNamespaces:
  - metallb-system
EOF
Confirm the Operator group is installed in the namespace:
$ oc get operatorgroup -n metallb-system
Example output
NAME               AGE
metallb-operator   14m
Subscribe to the MetalLB Operator.
Run the following command to get the OKD major and minor version. You use this value to set the channel value in the next step.
$ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \
    grep -o '[0-9]*[.][0-9]*' | head -1)
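You can confirm the value that the command captured by echoing the variable. On a cluster based on OKD 4.10, for example, the output would resemble the following; your value depends on your cluster version:
$ echo $OC_VERSION
Example output
4.10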
To create a subscription custom resource for the Operator, enter the following command:
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub
  namespace: metallb-system
spec:
  channel: "${OC_VERSION}"
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
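Optionally, you can watch the progress of the subscription while the Operator installs. This is a sketch; the exact conditions that are reported depend on your catalog source and cluster:
$ oc describe subscription metallb-operator-sub -n metallb-system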
Confirm the install plan is in the namespace:
$ oc get installplan -n metallb-system
Example output
NAME            CSV                                    APPROVAL    APPROVED
install-wzg94   metallb-operator.4.10.0-nnnnnnnnnnnn   Automatic   true
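The example output shows an install plan that was approved automatically. If your subscription specifies a Manual approval strategy, the APPROVED column shows false until you approve the plan. As a sketch, you could approve it by patching the install plan, substituting the install plan name from your own output:
$ oc patch installplan install-wzg94 -n metallb-system \
    --type merge --patch '{"spec":{"approved":true}}'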
To verify that the Operator is installed, enter the following command:
$ oc get clusterserviceversion -n metallb-system \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name                                   Phase
metallb-operator.4.10.0-nnnnnnnnnnnn   Succeeded
Starting MetalLB on your cluster
After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster.
Prerequisites
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the MetalLB Operator.
Procedure
Create a single instance of a MetalLB custom resource:
$ cat << EOF | oc apply -f -
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
EOF
Verification
Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running.
Check that the deployment for the controller is running:
$ oc get deployment -n metallb-system controller
Example output
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
controller   1/1     1            1           11m
Check that the daemon set for the speaker is running:
$ oc get daemonset -n metallb-system speaker
Example output
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
speaker   6         6         6       6            6           kubernetes.io/os=linux   18m
The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster.
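As a quick cross-check, you can compare the DESIRED count with the number of Linux nodes in your cluster, because the daemon set uses the kubernetes.io/os=linux node selector:
$ oc get nodes -l kubernetes.io/os=linux --no-headers | wc -l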
Limit speaker pods to specific nodes
By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. Only the nodes with a speaker pod can advertise a load balancer IP address. You can configure the MetalLB custom resource with a node selector to specify which nodes run the speaker pods.
The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. Only the nodes with a running speaker pod are advertised as destinations of the load balancer IP address.
If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes.
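For example, a service that requests a load balancer IP address from MetalLB and sets the external traffic policy to Local might resemble the following sketch. The service name, namespace, selector, and ports are placeholders for your own application:
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer
  externalTrafficPolicy: Local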
Example configuration to limit speaker pods to worker nodes
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector: (1)
    node-role.kubernetes.io/worker: ""
  speakerTolerations: (2)
  - key: "Example"
    operator: "Exists"
    effect: "NoExecute"
1. The example configuration specifies to assign the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector.
2. In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the key value and effect value using the operator.
After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command. Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker=.
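For example, with the worker node selector from the previous configuration, the DESIRED count that the first command reports should match the number of nodes that the second command returns:
$ oc get daemonset -n metallb-system speaker
$ oc get nodes -l node-role.kubernetes.io/worker=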
You can optionally allow nodes to control which speaker pods should, or should not, be scheduled on them by applying a list of tolerations. For more information about taints and tolerations, see the additional resources.
Additional resources
For more information about node selectors, see Placing pods on specific nodes using node selectors.
For more information about taints and tolerations, see Understanding taints and tolerations.