Migrating from the OpenShift SDN cluster network provider
As a cluster administrator, you can migrate to the OVN-Kubernetes Container Network Interface (CNI) cluster network provider from the OpenShift SDN CNI cluster network provider.
To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network provider.
Migration to the OVN-Kubernetes network provider
Migrating to the OVN-Kubernetes Container Network Interface (CNI) cluster network provider is a manual process that includes some downtime during which your cluster is unreachable. Although a rollback procedure is provided, the migration is intended to be a one-way process.
A migration to the OVN-Kubernetes cluster network provider is supported on installer-provisioned clusters on the following platforms:
Bare metal hardware
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Azure
Red Hat OpenStack Platform (RHOSP)
VMware vSphere
Performing a migration on a user-provisioned cluster is not supported.
Considerations for migrating to the OVN-Kubernetes network provider
The subnets assigned to nodes and the IP addresses assigned to individual pods are not preserved during the migration.
While the OVN-Kubernetes network provider implements many of the capabilities present in the OpenShift SDN network provider, the configuration is not the same.
If your cluster uses any of the following OpenShift SDN capabilities, you must manually configure the same capability in OVN-Kubernetes:
Namespace isolation
Egress IP addresses
Egress network policies
Egress router pods
Multicast
If your cluster uses any part of the `100.64.0.0/16` IP address range, you cannot migrate to OVN-Kubernetes because the OVN-Kubernetes network provider uses this IP address range internally.
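For example, you can review the address blocks that are currently assigned to the cluster network and the service network with a command similar to the following; if either overlaps with `100.64.0.0/16`, you cannot migrate:
$ oc get network.config.openshift.io cluster \
    -o jsonpath='{.spec.clusterNetwork}{"\n"}{.spec.serviceNetwork}{"\n"}'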
The following sections highlight the differences in how OVN-Kubernetes and OpenShift SDN configure these capabilities.
Namespace isolation
OVN-Kubernetes supports only the network policy isolation mode.
If your cluster uses OpenShift SDN configured in either the multitenant or subnet isolation modes, you cannot migrate to the OVN-Kubernetes network provider.
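To check which isolation mode your OpenShift SDN deployment uses before you plan the migration, a command similar to the following sketch reads the mode from the Cluster Network Operator configuration; an empty result indicates that the default network policy mode is in use:
$ oc get network.operator.openshift.io cluster \
    -o jsonpath='{.spec.defaultNetwork.openshiftSDNConfig.mode}{"\n"}'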
Egress IP addresses
The differences in configuring an egress IP address between OVN-Kubernetes and OpenShift SDN are described in the following table:
OVN-Kubernetes | OpenShift SDN |
---|---|
Create an `EgressIP` object | Patch a `NetNamespace` object and a `HostSubnet` object |
For more information on using egress IP addresses in OVN-Kubernetes, see “Configuring an egress IP address”.
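As an illustration, a minimal OVN-Kubernetes `EgressIP` object might look like the following sketch; the name, IP address, and namespace label are placeholder values, and nodes that can host the address are expected to carry the `k8s.ovn.org/egress-assignable` label:
$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-sample            # placeholder name
spec:
  egressIPs:
  - 192.0.2.10                     # example address from the node subnet
  namespaceSelector:
    matchLabels:
      env: production              # namespaces with this label use the egress IP
EOF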
Egress network policies
The difference in configuring an egress network policy, also known as an egress firewall, between OVN-Kubernetes and OpenShift SDN is described in the following table:
OVN-Kubernetes | OpenShift SDN |
---|---|
Create an `EgressFirewall` object in a namespace | Create an `EgressNetworkPolicy` object in a namespace |
For more information on using an egress firewall in OVN-Kubernetes, see “Configuring an egress firewall for a project”.
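As an illustration, a minimal `EgressFirewall` object that denies all external traffic from a namespace might look like the following sketch; the namespace is a placeholder and the single deny rule is only an example:
$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default                    # OVN-Kubernetes expects the name "default"
  namespace: example-project       # placeholder namespace
spec:
  egress:
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0      # deny egress to all external destinations
EOF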
Egress router pods
OVN-Kubernetes does not support using egress router pods in OKD 4.7.
Multicast
The difference between enabling multicast traffic on OVN-Kubernetes and OpenShift SDN is described in the following table:
OVN-Kubernetes | OpenShift SDN |
---|---|
Add an annotation on a `Namespace` object | Add an annotation on a `NetNamespace` object |
For more information on using multicast in OVN-Kubernetes, see “Enabling multicast for a project”.
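For example, with OVN-Kubernetes you enable multicast for a project by annotating its Namespace object, as shown in the following sketch; the project name is a placeholder:
$ oc annotate namespace example-project k8s.ovn.org/multicast-enabled=true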
Network policies
OVN-Kubernetes fully supports the Kubernetes NetworkPolicy
API in the networking.k8s.io/v1
API group. No changes are necessary in your network policies when migrating from OpenShift SDN.
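For reference, a standard policy such as the following default-deny-ingress example continues to work unchanged after the migration; the name and namespace are placeholders:
$ cat <<EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress       # placeholder name
  namespace: example-project       # placeholder namespace
spec:
  podSelector: {}                  # selects all pods in the namespace
  policyTypes:
  - Ingress                        # no ingress rules are defined, so all ingress is denied
EOF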
How the migration process works
The migration process works as follows:
1. Set a temporary annotation on the Cluster Network Operator (CNO) configuration object. This annotation triggers the CNO to watch for a change to the `defaultNetwork` field.
2. Suspend the Machine Config Operator (MCO) to ensure that it does not interrupt the migration.
3. Update the `defaultNetwork` field. The update causes the CNO to destroy the OpenShift SDN control plane pods and deploy the OVN-Kubernetes control plane pods. Additionally, it updates the Multus objects to reflect the new cluster network provider.
4. Reboot each node in the cluster. Because the existing pods in the cluster are unaware of the change to the cluster network provider, rebooting each node ensures that each node is drained of pods. New pods are attached to the new cluster network provided by OVN-Kubernetes.
5. Enable the MCO after all nodes in the cluster reboot. The MCO rolls out an update to the systemd configuration necessary to complete the migration. The MCO updates a single machine per pool at a time by default, so the total time the migration takes increases with the size of the cluster.
Migrating to the OVN-Kubernetes default CNI network provider
As a cluster administrator, you can change the default Container Network Interface (CNI) network provider for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster.
While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable.
Prerequisites
A cluster installed on installer-provisioned infrastructure and configured with the OpenShift SDN default CNI network provider in the network policy isolation mode.
Install the OpenShift CLI (`oc`).
Access to the cluster as a user with the `cluster-admin` role.
A recent backup of the etcd database is available.
The cluster is in a known good state, without any errors.
A reboot can be triggered manually for each node.
Procedure
To backup the configuration for the cluster network, enter the following command:
$ oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml
To enable the migration, set an annotation on the Cluster Network Operator configuration object by entering the following command:
$ oc annotate Network.operator.openshift.io cluster \
'networkoperator.openshift.io/network-migration'=""
Stop all of the machine configuration pools managed by the Machine Config Operator (MCO):
Stop the master configuration pool:
$ oc patch MachineConfigPool master --type='merge' --patch \
'{ "spec": { "paused": true } }'
Stop the worker configuration pool:
$ oc patch MachineConfigPool worker --type='merge' --patch \
'{ "spec":{ "paused" :true } }'
Configure the OVN-Kubernetes cluster network provider by using one of the following commands:
To specify the network provider without changing the cluster network IP address block, enter the following command:
$ oc patch Network.config.openshift.io cluster \
--type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }'
To specify a different cluster network IP address block, enter the following command:
$ oc patch Network.config.openshift.io cluster \
--type='merge' --patch '{
"spec": {
"clusterNetwork": [
{
"cidr": "<cidr>",
"hostPrefix": "<prefix>"
}
],
"networkType": "OVNKubernetes"
}
}'
where `<cidr>` is a CIDR block and `<prefix>` is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the `100.64.0.0/16` CIDR block, because the OVN-Kubernetes network provider uses this block internally.
You cannot change the service network address block during the migration.
Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements:
Maximum transmission unit (MTU)
Geneve (Generic Network Virtualization Encapsulation) overlay network port
To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch.
$ oc patch Network.operator.openshift.io cluster --type=merge \
--patch '{
"spec":{
"defaultNetwork":{
"ovnKubernetesConfig":{
"mtu":<mtu>,
"genevePort":<port>
}}}}'
`mtu`
The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value.
`port`
The UDP port for the Geneve overlay network. If a value is not specified, the default is `6081`. The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is `4789`.
Example patch command to update the `mtu` field
field$ oc patch Network.operator.openshift.io cluster --type=merge \
--patch '{
"spec":{
"defaultNetwork":{
"ovnKubernetesConfig":{
"mtu":1200
}}}}'
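If you are unsure of the smallest MTU in use on your nodes, one way to inspect the interface MTUs on a node is a debug session similar to the following sketch; `<node_name>` is a placeholder and the relevant interface depends on your platform:
$ oc debug node/<node_name> -- chroot /host ip -o link show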
Wait until the Multus daemon set rollout completes.
$ oc -n openshift-multus rollout status daemonset/multus
The names of the Multus pods are in the form `multus-<xxxxx>`, where `<xxxxx>` is a random sequence of letters. It might take several moments for the pods to restart.
Example output
Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated...
...
Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available...
daemon set "multus" successfully rolled out
To complete the migration, reboot each node in your cluster. For example, you can use a bash script similar to the following example. The script assumes that you can connect to each host by using `ssh` and that you have configured `sudo` to not prompt for a password.
#!/bin/bash
for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
do
echo "reboot node $ip"
ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3
done
If ssh access is not available, you might be able to reboot each node through the management portal for your infrastructure provider.
After the nodes in your cluster have rebooted, start all of the machine configuration pools:
Start the master configuration pool:
$ oc patch MachineConfigPool master --type='merge' --patch \
'{ "spec": { "paused": false } }'
Start the worker configuration pool:
$ oc patch MachineConfigPool worker --type='merge' --patch \
'{ "spec": { "paused": false } }'
As the MCO updates machines in each config pool, it reboots each node.
By default the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster.
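You can monitor the progress of each pool, for example by checking the pool status until the UPDATED column reports True for both pools:
$ oc get MachineConfigPool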
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"
Example output
kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done
Verify that the following statements are true:
The value of the `machineconfiguration.openshift.io/state` field is `Done`.
The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
To confirm that the machine config is correct, enter the following command:
$ oc get machineconfig <config_name> -o yaml | grep ExecStart
where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
The machine config must include the following update to the systemd configuration:
ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes
Confirm that the migration succeeded:
To confirm that the default CNI network provider is OVN-Kubernetes, enter the following command. The value of `status.networkType` must be `OVNKubernetes`.
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
To confirm that the cluster nodes are in the `Ready` state, enter the following command:
$ oc get nodes
If a node is stuck in the `NotReady` state, investigate the machine config daemon pod logs and resolve any errors.
To list the pods, enter the following command:
$ oc get pod -n openshift-machine-config-operator
Example output
NAME READY STATUS RESTARTS AGE
machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m
machine-config-daemon-5cf4b 2/2 Running 0 43h
machine-config-daemon-7wzcd 2/2 Running 0 43h
machine-config-daemon-fc946 2/2 Running 0 43h
machine-config-daemon-g2v28 2/2 Running 0 43h
machine-config-daemon-gcl4f 2/2 Running 0 43h
machine-config-daemon-l5tnv 2/2 Running 0 43h
machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m
machine-config-server-bsc8h 1/1 Running 0 43h
machine-config-server-hklrm 1/1 Running 0 43h
machine-config-server-k9rtx 1/1 Running 0 43h
The names for the config daemon pods are in the following format: `machine-config-daemon-<seq>`. The `<seq>` value is a random five-character alphanumeric sequence.
Display the pod log for the first machine config daemon pod shown in the previous output by entering the following command:
$ oc logs <pod> -n openshift-machine-config-operator
where `<pod>` is the name of a machine config daemon pod.
Resolve any errors in the logs shown by the output from the previous command.
To confirm that your pods are not in an error state, enter the following command:
$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'
If pods on a node are in an error state, reboot that node.
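For example, to check for pods on a particular node that are not in the Running or Completed state, you might use a command similar to the following sketch; `<node_name>` is a placeholder:
$ oc get pods --all-namespaces -o wide \
    --field-selector spec.nodeName=<node_name> | grep -Ev 'Running|Completed'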
Complete the following steps only if the migration succeeds and your cluster is in a good state:
To remove the migration annotation from the Cluster Network Operator configuration object, enter the following command:
$ oc annotate Network.operator.openshift.io cluster \
networkoperator.openshift.io/network-migration-
To remove the OpenShift SDN network provider namespace, enter the following command:
$ oc delete namespace openshift-sdn
Additional resources
Configuration parameters for the OVN-Kubernetes default CNI network provider
OVN-Kubernetes capabilities
OpenShift SDN capabilities
Network [operator.openshift.io/v1]