Advanced migration options
You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance.
Terminology
Term | Definition |
---|---|
Source cluster | Cluster from which the applications are migrated. |
Destination cluster[1] | Cluster to which the applications are migrated. |
Replication repository | Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. |
Host cluster | Cluster on which the migration-controller pod and the MTC web console run. The host cluster does not require an exposed registry route for direct image migration. |
Remote cluster | A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. |
Indirect migration | Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. |
Direct volume migration | Persistent volumes are copied directly from the source cluster to the destination cluster. |
Direct image migration | Images are copied directly from the source cluster to the destination cluster. |
Stage migration | Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. |
Cutover migration | The application is stopped on the source cluster and its resources are migrated to the destination cluster. |
State migration | Application state is migrated by copying specific persistent volume claims and Kubernetes objects to the destination cluster. |
Rollback migration | Rollback migration rolls back a completed migration. |
[1] Called the target cluster in the MTC web console.
Migrating an application from on-premises to a cloud-based cluster
You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters. The crane tunnel-api
command establishes such a tunnel by creating a VPN tunnel on the source cluster and then connecting to a VPN server running on the destination cluster. The VPN server is exposed to the client using a load balancer address on the destination cluster.
A service created on the destination cluster exposes the source cluster’s API to MTC, which is running on the destination cluster.
Prerequisites
The system that creates the VPN tunnel must have access to both clusters and be logged in to both of them.
It must be possible to create a load balancer on the destination cluster. Refer to your cloud provider to ensure this is possible.
Prepare names for the namespaces, on both the source cluster and the destination cluster, in which the VPN tunnel will run. These namespaces should not be created in advance. For information about namespace rules, see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names.
When connecting multiple firewall-protected source clusters to the cloud cluster, each source cluster requires its own namespace.
OpenVPN server is installed on the destination cluster.
OpenVPN client is installed on the source cluster.
When configuring the source cluster in MTC, the API URL takes the form https://proxied-cluster.<namespace>.svc.cluster.local:8443.
If you use the API, see Create a MigCluster CR manifest for each remote cluster.
If you use the MTC web console, see Migrating your applications using the MTC web console.
The MTC web console and Migration Controller must be installed on the target cluster.
Procedure
Install the crane utility:
$ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.7.0):/crane ./
Log in remotely to a node on the source cluster and a node on the destination cluster.
Obtain the cluster context for both clusters after logging in:
$ oc config view
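If you only need the context names that the next step requires, the following standard oc config subcommands are a convenient alternative to reading the full oc config view output (shown here as an optional aid, not part of the original procedure):
$ oc config get-contexts -o name   # list the names of all contexts in the kubeconfig
$ oc config current-context        # print the context that is currently in use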
Establish a tunnel by entering the following command on the system that has access to both clusters:
$ crane tunnel-api [--namespace <namespace>] \
--destination-context <destination-cluster> \
--source-context <source-cluster>
If you do not specify a namespace, the command uses the default value openvpn.
For example:
$ crane tunnel-api --namespace my-tunnel \
--destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \
--source-context default/192-168-122-171-nip-io:8443/admin
See all available parameters for the crane tunnel-api command by entering crane tunnel-api --help.
The command generates TLS/SSL certificates. This process might take several minutes. A message appears when the process completes.
The OpenVPN server starts on the destination cluster and the OpenVPN client starts on the source cluster.
After a few minutes, the address of the load balancer resolves on the source node.
You can view the log for the OpenVPN pods to check the status of this process by entering the following commands with root privileges:
# oc get po -n <namespace>
Example output
NAME         READY   STATUS    RESTARTS   AGE
<pod_name>   2/2     Running   0          44s
# oc logs -f -n <namespace> <pod_name> -c openvpn
When the address of the load balancer is resolved, the message Initialization Sequence Completed appears at the end of the log.
On the OpenVPN server, which is on a destination control node, verify that the openvpn service and the proxied-cluster service are running:
$ oc get service -n <namespace>
On the source node, get the service account (SA) token for the migration controller:
# oc sa get-token -n openshift-migration migration-controller
Open the MTC web console and add the source cluster, using the following values:
Cluster name: The source cluster name.
URL: proxied-cluster.<namespace>.svc.cluster.local:8443. If you did not define a value for <namespace>, use openvpn.
Service account token: The token of the migration controller service account.
Exposed route host to image registry: proxied-cluster.<namespace>.svc.cluster.local:5000. If you did not define a value for <namespace>, use openvpn.
After MTC has successfully validated the connection, you can proceed to create and run a migration plan. The namespace for the source cluster should appear in the list of namespaces.
Additional resources
For information about creating a MigCluster CR manifest for each remote cluster, see Migrating an application by using the MTC API.
For information about adding a cluster using the web console, see Migrating your applications by using the MTC web console.
Migrating applications by using the command line
You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration.
Migration prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Direct image migration
You must ensure that the secure internal registry of the source cluster is exposed.
You must create a route to the exposed registry.
Direct volume migration
- If your clusters use proxies, you must configure a Stunnel TCP proxy.
Internal images
If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster.
You can manually update an image stream tag in order to use a deprecated OKD 3 image on an OKD 4.10 cluster.
Clusters
The source cluster must be upgraded to the latest MTC z-stream release.
The MTC version must be the same on all clusters.
Network
The clusters must have unrestricted network access to each other and to the replication repository.
If you copy the persistent volumes with move, the clusters must have unrestricted network access to the remote volumes.
You must enable the following ports on an OKD 3 cluster:
8443 (API server)
443 (routes)
53 (DNS)
You must enable the following ports on an OKD 4 cluster:
6443 (API server)
443 (routes)
53 (DNS)
You must enable port 443 on the replication repository if you are using TLS.
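The following is a rough way to spot-check that these ports are reachable from a machine with network access to the clusters. The host names are placeholders you must replace, and the nc and dig utilities are assumed to be available; this is an optional sanity check, not part of the documented prerequisites:
$ nc -zv <okd3_api_host> 8443                 # OKD 3 API server
$ nc -zv <okd4_api_host> 6443                 # OKD 4 API server
$ nc -zv <cluster_apps_host> 443              # routes
$ dig <route_hostname> @<cluster_dns_host>    # DNS resolution (port 53)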
Persistent volumes (PVs)
The PVs must be valid.
The PVs must be bound to persistent volume claims. A quick check for both of these conditions is sketched after this list.
If you use snapshots to copy the PVs, the following additional prerequisites apply:
The cloud provider must support snapshots.
The PVs must have the same cloud provider.
The PVs must be located in the same geographic region.
The PVs must have the same storage class.
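One way to confirm that the PVs are valid and bound before you create a migration plan is to list them with standard oc commands (an optional check; the grep filter simply hides rows that are already in the Bound phase):
$ oc get pv                                     # the STATUS column should show Bound
$ oc get pvc --all-namespaces | grep -v Bound   # prints the header plus any PVCs that are not Bound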
Creating a registry route for direct image migration
For direct image migration, you must create a route to the exposed internal registry on all remote clusters.
Prerequisites
The internal registry must be exposed to external traffic on all remote clusters.
The OKD 4 registry is exposed by default.
The OKD 3 registry must be exposed manually.
Procedure
To create a route to an OKD 3 registry, run the following command:
$ oc create route passthrough --service=docker-registry -n default
To create a route to an OKD 4 registry, run the following command:
$ oc create route passthrough --service=image-registry -n openshift-image-registry
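If you plan to use the route later as the exposedRegistryPath value in a MigCluster CR, one way to look up the route host is with a jsonpath query. The route names below assume the defaults created by the commands above:
$ oc get route docker-registry -n default -o jsonpath='{.spec.host}'                   # OKD 3
$ oc get route image-registry -n openshift-image-registry -o jsonpath='{.spec.host}'   # OKD 4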
Configuring proxies
For OKD 4.1 and earlier versions, you must configure proxies in the MigrationController
custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy
object.
For OKD 4.2 to 4.10, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.
You must configure the proxies to allow the SPDY protocol and to forward the Upgrade HTTP
header to the API server. Otherwise, an Upgrade request required
error is displayed. The MigrationController
CR uses SPDY to run commands within remote pods. The Upgrade HTTP
header is required in order to open a websocket connection with the API server.
Direct volume migration
If you are performing a direct volume migration (DVM) from a source cluster behind a proxy, you must configure a Stunnel proxy. Stunnel creates a transparent tunnel between the source and target clusters for the TCP connection without changing the certificates.
DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
Get the MigrationController CR manifest:
$ oc get migrationcontroller <migration_controller> -n openshift-migration
Update the proxy parameters:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: <migration_controller>
  namespace: openshift-migration
...
spec:
  stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> (1)
  httpProxy: http://<username>:<password>@<ip>:<port> (2)
  httpsProxy: http://<username>:<password>@<ip>:<port> (3)
  noProxy: example.com (4)
1 Stunnel proxy URL for direct volume migration.
2 Proxy URL for creating HTTP connections outside the cluster. The URL scheme must be http.
3 Proxy URL for creating HTTPS connections outside the cluster. If this is not specified, then httpProxy is used for both HTTP and HTTPS connections.
4 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set.
Save the manifest as migration-controller.yaml.
Apply the updated manifest:
$ oc replace -f migration-controller.yaml -n openshift-migration
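An optional spot check that the proxy values are present in the applied manifest (a simple grep over the CR; not part of the documented procedure):
$ oc get migrationcontroller <migration_controller> -n openshift-migration -o yaml | grep -i proxy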
Migrating an application by using the MTC API
You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API.
Procedure
Create a MigCluster CR manifest for the host cluster:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <host_cluster>
  namespace: openshift-migration
spec:
  isHostCluster: true
EOF
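Optionally, confirm that the host MigCluster CR was created. The lowercase resource name migcluster follows the same convention as the migstorage and migplan queries used later in this procedure:
$ oc get migcluster -n openshift-migration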
Create a Secret object manifest for each remote cluster:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <cluster_secret>
  namespace: openshift-config
type: Opaque
data:
  saToken: <sa_token> (1)
EOF
1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. You can obtain the token by running the following command:
$ oc sa get-token migration-controller -n openshift-migration | base64 -w 0
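If you prefer not to base64-encode the token by hand, an equivalent shortcut is to let oc create secret perform the encoding. This is a sketch rather than the documented procedure; run the first command while logged in to the remote cluster and the second while logged in to the host cluster:
$ SA_TOKEN=$(oc sa get-token migration-controller -n openshift-migration)   # on the remote cluster
$ oc create secret generic <cluster_secret> -n openshift-config \
    --from-literal=saToken="$SA_TOKEN"   # on the host cluster; --from-literal values are base64-encoded automatically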
Create a MigCluster CR manifest for each remote cluster:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <remote_cluster> (1)
  namespace: openshift-migration
spec:
  exposedRegistryPath: <exposed_registry_route> (2)
  insecure: false (3)
  isHostCluster: false
  serviceAccountSecretRef:
    name: <remote_cluster_secret> (4)
    namespace: openshift-config
  url: <remote_cluster_url> (5)
EOF
1 Specify the name of the MigCluster CR for the remote cluster.
2 Optional: For direct image migration, specify the exposed registry route.
3 SSL verification is enabled if false. CA certificates are not required or checked if true.
4 Specify the Secret object of the remote cluster.
5 Specify the URL of the remote cluster.
Verify that all clusters are in a Ready state:
$ oc describe migcluster <cluster>
Create a Secret object manifest for the replication repository:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-config
  name: <migstorage_creds>
type: Opaque
data:
  aws-access-key-id: <key_id_base64> (1)
  aws-secret-access-key: <secret_key_base64> (2)
EOF
1 Specify the key ID in base64 format.
2 Specify the secret key in base64 format.
AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key:
$ echo -n "<key>" | base64 -w 0 (1)
1 Specify the key ID or the secret key. Both keys must be base64-encoded.
Create a MigStorage CR manifest for the replication repository:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: <migstorage>
  namespace: openshift-migration
spec:
  backupStorageConfig:
    awsBucketName: <bucket> (1)
    credsSecretRef:
      name: <storage_secret> (2)
      namespace: openshift-config
  backupStorageProvider: <storage_provider> (3)
  volumeSnapshotConfig:
    credsSecretRef:
      name: <storage_secret> (4)
      namespace: openshift-config
  volumeSnapshotProvider: <storage_provider> (5)
EOF
1 Specify the bucket name.
2 Specify the Secret CR of the object storage. You must ensure that the credentials stored in the Secret CR of the object storage are correct.
3 Specify the storage provider.
4 Optional: If you are copying data by using snapshots, specify the Secret CR of the object storage. You must ensure that the credentials stored in the Secret CR of the object storage are correct.
5 Optional: If you are copying data by using snapshots, specify the storage provider.
Verify that the MigStorage CR is in a Ready state:
$ oc describe migstorage <migstorage>
Create a MigPlan CR manifest:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  destMigClusterRef:
    name: <host_cluster>
    namespace: openshift-migration
  indirectImageMigration: true (1)
  indirectVolumeMigration: true (2)
  migStorageRef:
    name: <migstorage> (3)
    namespace: openshift-migration
  namespaces:
  - <source_namespace_1> (4)
  - <source_namespace_2>
  - <source_namespace_3>:<destination_namespace> (5)
  srcMigClusterRef:
    name: <remote_cluster> (6)
    namespace: openshift-migration
EOF
1 Direct image migration is enabled if false.
2 Direct volume migration is enabled if false.
3 Specify the name of the MigStorage CR instance.
4 Specify one or more source namespaces. By default, the destination namespace has the same name.
5 Specify a destination namespace if it is different from the source namespace.
6 Specify the name of the source cluster MigCluster instance.
Verify that the MigPlan instance is in a Ready state:
$ oc describe migplan <migplan> -n openshift-migration
Create a MigMigration CR manifest to start the migration defined in the MigPlan instance:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: <migmigration>
  namespace: openshift-migration
spec:
  migPlanRef:
    name: <migplan> (1)
    namespace: openshift-migration
  quiescePods: true (2)
  stage: false (3)
  rollback: false (4)
EOF
1 Specify the MigPlan CR name.
2 The pods on the source cluster are stopped before migration if true.
3 A stage migration, which copies most of the data without stopping the application, is performed if true.
4 A completed migration is rolled back if true.
Monitor the migration progress by viewing the MigMigration CR status:
$ oc describe migmigration <migmigration> -n openshift-migration
The output resembles the following:
Example output
Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc
Namespace: openshift-migration
Labels: migration.openshift.io/migplan-name=django
Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c
API Version: migration.openshift.io/v1alpha1
Kind: MigMigration
...
Spec:
  Mig Plan Ref:
    Name:       migplan
    Namespace:  openshift-migration
  Stage:        false
Status:
  Conditions:
    Category:              Advisory
    Last Transition Time:  2021-02-02T15:04:09Z
    Message:               Step: 19/47
    Reason:                InitialBackupCreated
    Status:                True
    Type:                  Running
    Category:              Required
    Last Transition Time:  2021-02-02T15:03:19Z
    Message:               The migration is ready.
    Status:                True
    Type:                  Ready
    Category:              Required
    Durable:               true
    Last Transition Time:  2021-02-02T15:04:05Z
    Message:               The migration registries are healthy.
    Status:                True
    Type:                  RegistriesHealthy
  Itinerary:        Final
  Observed Digest:  7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5
  Phase:            InitialBackupCreated
  Pipeline:
    Completed:  2021-02-02T15:04:07Z
    Message:    Completed
    Name:       Prepare
    Started:    2021-02-02T15:03:18Z
    Message:    Waiting for initial Velero backup to complete.
    Name:       Backup
    Phase:      InitialBackupCreated
    Progress:
      Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s)
    Started:    2021-02-02T15:04:07Z
    Message:    Not started
    Name:       StageBackup
    Message:    Not started
    Name:       StageRestore
    Message:    Not started
    Name:       DirectImage
    Message:    Not started
    Name:       DirectVolume
    Message:    Not started
    Name:       Restore
    Message:    Not started
    Name:       Cleanup
  Start Timestamp:  2021-02-02T15:03:18Z
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Running 57s migmigration_controller Step: 2/47
Normal Running 57s migmigration_controller Step: 3/47
Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47
Normal Running 54s migmigration_controller Step: 5/47
Normal Running 54s migmigration_controller Step: 6/47
Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47
Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47
Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready.
Normal Running 50s migmigration_controller Step: 9/47
Normal Running 50s migmigration_controller Step: 10/47
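If you only need the current phase rather than the full output, the field shown as Phase in the Status block above can be read directly with a jsonpath query (an optional shortcut):
$ oc get migmigration <migmigration> -n openshift-migration -o jsonpath='{.status.phase}'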
State migration
You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application’s state. You migrate specified PVCs by excluding other PVCs from the migration plan. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved. The application pods continue to run on the source cluster. You can map the PVCs to ensure that the source and target PVCs are synchronized.
You can perform a one-time migration of Kubernetes objects that constitute an application’s state.
If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC.
You can perform a state migration between clusters or within the same cluster.
State migration migrates only the components that constitute an application’s state. If you want to migrate an entire namespace, use stage or cutover migration.
Additional resources
See Excluding PVCs from migration to select PVCs for state migration.
See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster.
See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application’s state.
Migration hooks
You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.
A migration hook runs on a source or a target cluster at one of the following migration steps:
PreBackup: Before resources are backed up on the source cluster.
PostBackup: After resources are backed up on the source cluster.
PreRestore: Before resources are restored on the target cluster.
PostRestore: After resources are restored on the target cluster.
You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container.
Ansible playbook
The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan
custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed.
The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.7
. This image is based on the Ansible Runner image and includes python-openshift
for Ansible Kubernetes resources and an updated oc
binary.
Custom hook container
You can use a custom hook container instead of the default Ansible image.
Writing an Ansible playbook for a migration hook
You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks
parameters in the MigPlan
custom resource (CR) manifest.
The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan
CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster.
Ansible modules
You can use the Ansible shell
module to run oc
commands.
Example shell
module
- hosts: localhost
  gather_facts: false
  tasks:
  - name: get pod name
    shell: oc get po --all-namespaces
You can use kubernetes.core modules, such as k8s_info, to interact with Kubernetes resources.
Example k8s_info module
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Get pod
    k8s_info:
      kind: pods
      api: v1
      namespace: openshift-migration
      name: "{{ lookup( 'env', 'HOSTNAME') }}"
    register: pods
  - name: Print pod name
    debug:
      msg: "{{ pods.resources[0].metadata.name }}"
You can use the fail
module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container.
Example fail
module
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Set a boolean
    set_fact:
      do_fail: true
  - name: "fail"
    fail:
      msg: "Cause a failure"
    when: do_fail
Environment variables
The MigPlan
CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup
plug-in.
Example environment variables
- hosts: localhost
  gather_facts: false
  tasks:
  - set_fact:
      namespaces: "{{ (lookup( 'env', 'migration_namespaces')).split(',') }}"
  - debug:
      msg: "{{ item }}"
    with_items: "{{ namespaces }}"
  - debug:
      msg: "{{ lookup( 'env', 'migplan_name') }}"
Migration plan options
You can exclude, edit, and map components in the MigPlan
custom resource (CR).
Excluding resources
You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool.
By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time.
Procedure
Edit the MigrationController custom resource manifest:
$ oc edit migrationcontroller <migration_controller> -n openshift-migration
Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  disable_image_migration: true (1)
  disable_pv_migration: true (2)
  additional_excluded_resources: (3)
  - resource1
  - resource2
  ...
1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts.
2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
3 You can add OKD resources that you want to exclude to the additional_excluded_resources list.
Wait two minutes for the MigrationController pod to restart so that the changes are applied.
Verify that the resource is excluded:
$ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1
The output contains the excluded resources:
Example output
name: EXCLUDED_RESOURCES
value:
resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims
Mapping namespaces
If you map namespaces in the MigPlan
custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration.
Two source namespaces mapped to the same destination namespace
spec:
  namespaces:
  - namespace_2
  - namespace_1:namespace_2
If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name.
Incorrect namespace mapping
spec:
  namespaces:
  - namespace_1:namespace_1
Correct namespace reference
spec:
  namespaces:
  - namespace_1
Excluding persistent volume claims
You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action
parameter of the MigPlan
custom resource (CR) after the persistent volumes (PVs) have been discovered.
Prerequisites
MigPlan CR is in a Ready state.
Procedure
Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  ...
  persistentVolumes:
  - capacity: 10Gi
    name: <pv_name>
    pvc:
      ...
    selection:
      action: skip
Mapping persistent volume claims
You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan
CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs.
You map PVCs by updating the spec.persistentVolumes.pvc.name
parameter in the MigPlan
custom resource (CR) after the PVs have been discovered.
Prerequisites
MigPlan CR is in a Ready state.
Procedure
Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  ...
  persistentVolumes:
  - capacity: 10Gi
    name: <pv_name>
    pvc:
      name: <source_pvc>:<destination_pvc> (1)
1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration.
Editing persistent volume attributes
After you create a MigPlan
custom resource (CR), the MigrationController
CR discovers the persistent volumes (PVs). The spec.persistentVolumes
block and the status.destStorageClasses
block are added to the MigPlan
CR.
You can edit the values in the spec.persistentVolumes.selection
block. If you change values outside the spec.persistentVolumes.selection
block, the values are overwritten when the MigPlan
CR is reconciled by the MigrationController
CR.
The default value for the
You can change the If the |
Prerequisites
MigPlan CR is in a Ready state.
Procedure
Edit the spec.persistentVolumes.selection values in the MigPlan CR:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  persistentVolumes:
  - capacity: 10Gi
    name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4
    proposedCapacity: 10Gi
    pvc:
      accessModes:
      - ReadWriteMany
      hasReference: true
      name: mysql
      namespace: mysql-persistent
    selection:
      action: <copy> (1)
      copyMethod: <filesystem> (2)
      verify: true (3)
      storageClass: <gp2> (4)
      accessMode: <ReadWriteMany> (5)
    storageClass: cephfs
1 Allowed values are move, copy, and skip. If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy.
2 Allowed values are snapshot and filesystem. Default value is filesystem.
3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false.
4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If no value is specified, the PV will have no storage class after migration.
5 Allowed values are ReadWriteOnce and ReadWriteMany. If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console.
Additional resources
For details about the move and copy actions, see MTC workflow.
For details about the skip action, see Excluding PVCs from migration.
For details about the file system and snapshot copy methods, see About data copy methods.
Migrating Kubernetes objects
You can perform a one-time migration of Kubernetes objects that constitute an application’s state.
After migration, the |
You add Kubernetes objects to the MigPlan CR by using one of the following options:
Adding the Kubernetes objects to the includedResources section.
Using the labelSelector parameter to reference labeled Kubernetes objects.
Adding Kubernetes objects to the includedResources section and then filtering them with the labelSelector parameter, for example, Secret and ConfigMap resources with the label app: frontend.
Procedure
Update the MigPlan CR:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  includedResources:
  - kind: <kind> (1)
    group: ""
  - kind: <kind>
    group: ""
  ...
  labelSelector:
    matchLabels:
      <label> (2)
1 Specify the Kubernetes object, for example, Secret or ConfigMap.
2 Specify the label of the resources to migrate, for example, app: frontend.
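For example, the combined option described in this section, migrating only Secret and ConfigMap resources that carry the label app: frontend, corresponds to the following MigPlan spec:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  includedResources:
  - kind: Secret
    group: ""
  - kind: ConfigMap
    group: ""
  labelSelector:
    matchLabels:
      app: frontend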
Migration controller options
You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController
custom resource (CR) for large migrations and improved performance.
Increasing limits for large migrations
You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC).
You must test these changes before you perform a migration in a production environment.
Procedure
Edit the MigrationController custom resource (CR) manifest:
$ oc edit migrationcontroller -n openshift-migration
Update the following parameters:
...
mig_controller_limits_cpu: "1" (1)
mig_controller_limits_memory: "10Gi" (2)
...
mig_controller_requests_cpu: "100m" (3)
mig_controller_requests_memory: "350Mi" (4)
...
mig_pv_limit: 100 (5)
mig_pod_limit: 100 (6)
mig_namespace_limit: 10 (7)
...
1 Specifies the number of CPUs available to the MigrationController CR.
2 Specifies the amount of memory available to the MigrationController CR.
3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3).
4 Specifies the amount of memory available for MigrationController CR requests.
5 Specifies the number of persistent volumes that can be migrated.
6 Specifies the number of pods that can be migrated.
7 Specifies the number of namespaces that can be migrated.
Create a migration plan that uses the updated parameters to verify the changes.
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan.
Enabling persistent volume resizing for direct volume migration
You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster.
When the disk usage of a PV reaches a configured level, the MigrationController
custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster.
A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3%. This means that PV resizing occurs when the disk usage of a PV is more than 97%. You can increase this threshold so that PV resizing occurs at a lower disk usage level; for example, a threshold of 41 causes resizing to occur when disk usage exceeds 59%.
PVC capacity is calculated according to the following criteria:
If the requested storage capacity (spec.resources.requests.storage) of the PVC is not equal to its actual provisioned capacity (status.capacity.storage), the greater value is used.
If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used.
Prerequisites
- The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands.
Procedure
Log in to the host cluster.
Enable PV resizing by patching the MigrationController CR:
$ oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ (1)
    --type='merge' -n openshift-migration
1 Set the value to false to disable PV resizing.
Optional: Update the pv_resizing_threshold parameter to increase the threshold:
$ oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ (1)
    --type='merge' -n openshift-migration
1 The default value is 3.
When the threshold is exceeded, the following status message is displayed in the MigPlan CR status:
status:
  conditions:
  ...
  - category: Warn
    durable: true
    lastTransitionTime: "2021-06-17T08:57:01Z"
    message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]'
    reason: Done
    status: "False"
    type: PvCapacityAdjustmentRequired
For AWS gp2 storage, this message does not appear unless the
pv_resizing_threshold
is 42% or greater because of the way gp2 calculates volume usage and size. (BZ#1973148)
Enabling cached Kubernetes clients
You can enable cached Kubernetes clients in the MigrationController
custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency.
Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients.
Cached clients require extra memory because the MigrationController
CR caches all API resources that are required for interacting with MigCluster
CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates.
You can increase the memory limits and requests of the MigrationController
CR if OOMKilled
errors occur after you enable cached clients.
Procedure
Enable cached clients by running the following command:
$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
'[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]'
Optional: Increase the MigrationController CR memory limits by running the following command:
$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
  '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]'
Optional: Increase the MigrationController CR memory requests by running the following command:
$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
  '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]'
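An optional check that the cached-client setting took effect, reading back the field set by the first patch command in this procedure:
$ oc get migrationcontroller migration-controller -n openshift-migration -o jsonpath='{.spec.mig_controller_enable_cache}'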