Advanced migration options
You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance.
MTC terminology
Term | Definition |
---|---|
Source cluster | Cluster from which the applications are migrated. |
Destination cluster[1] | Cluster to which the applications are migrated. |
Replication repository | Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. |
Host cluster | Cluster on which the migration-controller pod runs and the web console is hosted. It is usually the same cluster as the destination cluster, but this is not required. The host cluster does not require an exposed registry route for direct image migration. |
Remote cluster | A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. |
Indirect migration | Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. |
Direct volume migration | Persistent volumes are copied directly from the source cluster to the destination cluster. |
Direct image migration | Images are copied directly from the source cluster to the destination cluster. |
Stage migration | Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. |
Cutover migration | The application is stopped on the source cluster and its resources are migrated to the destination cluster. |
State migration | Application state is migrated by copying specific persistent volume claims and Kubernetes objects to the destination cluster. |
Rollback migration | Rollback migration rolls back a completed migration. |
1 Called the target cluster in the MTC web console.
Migrating applications by using the CLI
You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration.
Migration prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Direct image migration
You must ensure that the secure internal registry of the source cluster is exposed.
You must create a route to the exposed registry.
Direct volume migration
- If your clusters use proxies, you must configure a Stunnel TCP proxy.
Internal images
If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OKD 3 image on an OKD 4.6 cluster.
Clusters
The source cluster must be upgraded to the latest MTC z-stream release.
The MTC version must be the same on all clusters.
Network
The clusters have unrestricted network access to each other and to the replication repository.
If you copy the persistent volumes with move, the clusters must have unrestricted network access to the remote volumes.
You must enable the following ports on an OKD 3 cluster:
- 8443 (API server)
- 443 (routes)
- 53 (DNS)
You must enable the following ports on an OKD 4 cluster:
- 6443 (API server)
- 443 (routes)
- 53 (DNS)
You must enable port 443 on the replication repository if you are using TLS.
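If you are not sure whether these ports are reachable, you can run a quick connectivity check from a host that has network access to the clusters. This is a minimal sketch; the host names are placeholders, not values taken from your environment:
$ curl -k https://<okd4_api_server_host>:6443/version    # API server reachability
$ curl -k https://<exposed_route_host>:443               # route reachability
$ dig <exposed_route_host> @<cluster_dns_server>         # DNS resolution (port 53)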
Persistent volumes (PVs)
The PVs must be valid.
The PVs must be bound to persistent volume claims.
If you use snapshots to copy the PVs, the following additional prerequisites apply:
The cloud provider must support snapshots.
The PVs must have the same cloud provider.
The PVs must be located in the same geographic region.
The PVs must have the same storage class.
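Before you create a migration plan, you can confirm that the claims are bound. A minimal check, assuming <application_namespace> is one of the namespaces that you plan to migrate:
$ oc get pvc -n <application_namespace>    # every claim should report STATUS Bound
$ oc get pv                                # the backing PVs should also report STATUS Bound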
Creating a registry route for direct image migration
For direct image migration, you must create a route to the exposed internal registry on all remote clusters.
Prerequisites
The internal registry must be exposed to external traffic on all remote clusters.
The OKD 4 registry is exposed by default.
The OKD 3 registry must be exposed manually.
Procedure
To create a route to an OKD 3 registry, run the following command:
$ oc create route passthrough --service=docker-registry -n default
To create a route to an OKD 4 registry, run the following command:
$ oc create route passthrough --service=image-registry -n openshift-image-registry
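After the route is created, you can look up its host name, for example to use later as the exposedRegistryPath value of a MigCluster CR. The route names below match the routes created by the preceding commands; adjust them if your routes are named differently:
$ oc get route docker-registry -n default -o jsonpath='{.spec.host}'                   # OKD 3
$ oc get route image-registry -n openshift-image-registry -o jsonpath='{.spec.host}'   # OKD 4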
Configuring proxies
For OKD 4.1 and earlier versions, you must configure proxies in the MigrationController
custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy
object.
For OKD 4.2 to 4.6, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.
You must configure the proxies to allow the SPDY protocol and to forward the Upgrade HTTP
header to the API server. Otherwise, an Upgrade request required
error is displayed. The MigrationController
CR uses SPDY to run commands within remote pods. The Upgrade HTTP
header is required in order to open a websocket connection with the API server.
Direct volume migration
If you are performing a direct volume migration (DVM) from a source cluster behind a proxy, you must configure a Stunnel proxy. Stunnel creates a transparent tunnel between the source and target clusters for the TCP connection without changing the certificates.
DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
Get the MigrationController CR manifest:
$ oc get migrationcontroller <migration_controller> -n openshift-migration
Update the proxy parameters:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
name: <migration_controller>
namespace: openshift-migration
...
spec:
stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> (1)
httpProxy: http://<username>:<password>@<ip>:<port> (2)
httpsProxy: http://<username>:<password>@<ip>:<port> (3)
noProxy: example.com (4)
1 Stunnel proxy URL for direct volume migration.
2 Proxy URL for creating HTTP connections outside the cluster. The URL scheme must be http.
3 Proxy URL for creating HTTPS connections outside the cluster. If this is not specified, then httpProxy is used for both HTTP and HTTPS connections.
4 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set.
Save the manifest as migration-controller.yaml.
Apply the updated manifest:
$ oc replace -f migration-controller.yaml -n openshift-migration
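To confirm that the proxy settings were applied, you can inspect the live CR. This is a minimal check, assuming the same CR name as above:
$ oc get migrationcontroller <migration_controller> -n openshift-migration -o yaml \
  | grep -E 'stunnel_tcp_proxy|httpProxy|httpsProxy|noProxy'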
Migrating an application from the command line
You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API.
Procedure
Create a MigCluster CR manifest for the host cluster:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
name: <host_cluster>
namespace: openshift-migration
spec:
isHostCluster: true
EOF
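You can verify that the host MigCluster CR was created, for example:
$ oc get migcluster <host_cluster> -n openshift-migration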
Create a Secret CR manifest for each remote cluster:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
name: <cluster_secret>
namespace: openshift-config
type: Opaque
data:
saToken: <sa_token> (1)
EOF
1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. You can obtain the token by running the following command:
$ oc sa get-token migration-controller -n openshift-migration | base64 -w 0
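If you want to sanity-check the value before adding it to the Secret CR, you can decode it again. The decoded output should be the raw service account token, not another layer of base64:
$ echo -n "<sa_token>" | base64 -d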
Create a MigCluster CR manifest for each remote cluster:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
name: <remote_cluster> (1)
namespace: openshift-migration
spec:
exposedRegistryPath: <exposed_registry_route> (2)
insecure: false (3)
isHostCluster: false
serviceAccountSecretRef:
name: <remote_cluster_secret> (4)
namespace: openshift-config
url: <remote_cluster_url> (5)
EOF
1 Specify the Cluster CR of the remote cluster.
2 Optional: For direct image migration, specify the exposed registry route.
3 SSL verification is enabled if false. CA certificates are not required or checked if true.
4 Specify the Secret CR of the remote cluster.
5 Specify the URL of the remote cluster.
Verify that all clusters are in a Ready state:
$ oc describe migcluster <cluster> -n openshift-migration
Create a Secret CR manifest for the replication repository:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
namespace: openshift-config
name: <migstorage_creds>
type: Opaque
data:
aws-access-key-id: <key_id_base64> (1)
aws-secret-access-key: <secret_key_base64> (2)
EOF
1 Specify the key ID in base64 format.
2 Specify the secret key in base64 format.
AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key:
$ echo -n "<key>" | base64 -w 0 (1)
1 Specify the key ID or the secret key. Both keys must be base64-encoded.
Create a MigStorage CR manifest for the replication repository:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
name: <migstorage>
namespace: openshift-migration
spec:
backupStorageConfig:
awsBucketName: <bucket> (1)
credsSecretRef:
name: <storage_secret> (2)
namespace: openshift-config
backupStorageProvider: <storage_provider> (3)
volumeSnapshotConfig:
credsSecretRef:
name: <storage_secret> (4)
namespace: openshift-config
volumeSnapshotProvider: <storage_provider> (5)
EOF
1 Specify the bucket name.
2 Specify the Secret CR of the object storage. You must ensure that the credentials stored in the Secret CR of the object storage are correct.
3 Specify the storage provider.
4 Optional: If you are copying data by using snapshots, specify the Secret CR of the object storage. You must ensure that the credentials stored in the Secret CR of the object storage are correct.
5 Optional: If you are copying data by using snapshots, specify the storage provider.
Verify that the MigStorage CR is in a Ready state:
$ oc describe migstorage <migstorage> -n openshift-migration
Create a MigPlan CR manifest:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
name: <migplan>
namespace: openshift-migration
spec:
destMigClusterRef:
name: <host_cluster>
namespace: openshift-migration
indirectImageMigration: true (1)
indirectVolumeMigration: true (2)
migStorageRef:
name: <migstorage> (3)
namespace: openshift-migration
namespaces:
- <application_namespace> (4)
srcMigClusterRef:
name: <remote_cluster> (5)
namespace: openshift-migration
EOF
1 Direct image migration is enabled if false.
2 Direct volume migration is enabled if false.
3 Specify the name of the MigStorage CR instance.
4 Specify one or more source namespaces. By default, the destination namespace has the same name.
5 Specify the name of the source cluster MigCluster instance.
Verify that the MigPlan instance is in a Ready state:
$ oc describe migplan <migplan> -n openshift-migration
Create a MigMigration CR manifest to start the migration defined in the MigPlan instance:
$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
name: <migmigration>
namespace: openshift-migration
spec:
migPlanRef:
name: <migplan> (1)
namespace: openshift-migration
quiescePods: true (2)
stage: false (3)
rollback: false (4)
EOF
1 Specify the MigPlan CR name.
2 The pods on the source cluster are stopped before migration if true.
3 A stage migration, which copies most of the data without stopping the application, is performed if true.
4 A completed migration is rolled back if true.
Verify the migration progress by viewing the MigMigration CR status:
$ oc describe migmigration <migmigration> -n openshift-migration
The output resembles the following:
Example output
Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc
Namespace: openshift-migration
Labels: migration.openshift.io/migplan-name=django
Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c
API Version: migration.openshift.io/v1alpha1
Kind: MigMigration
...
Spec:
Mig Plan Ref:
Name: migplan
Namespace: openshift-migration
Stage: false
Status:
Conditions:
Category: Advisory
Last Transition Time: 2021-02-02T15:04:09Z
Message: Step: 19/47
Reason: InitialBackupCreated
Status: True
Type: Running
Category: Required
Last Transition Time: 2021-02-02T15:03:19Z
Message: The migration is ready.
Status: True
Type: Ready
Category: Required
Durable: true
Last Transition Time: 2021-02-02T15:04:05Z
Message: The migration registries are healthy.
Status: True
Type: RegistriesHealthy
Itinerary: Final
Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5
Phase: InitialBackupCreated
Pipeline:
Completed: 2021-02-02T15:04:07Z
Message: Completed
Name: Prepare
Started: 2021-02-02T15:03:18Z
Message: Waiting for initial Velero backup to complete.
Name: Backup
Phase: InitialBackupCreated
Progress:
Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s)
Started: 2021-02-02T15:04:07Z
Message: Not started
Name: StageBackup
Message: Not started
Name: StageRestore
Message: Not started
Name: DirectImage
Message: Not started
Name: DirectVolume
Message: Not started
Name: Restore
Message: Not started
Name: Cleanup
Start Timestamp: 2021-02-02T15:03:18Z
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Running 57s migmigration_controller Step: 2/47
Normal Running 57s migmigration_controller Step: 3/47
Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47
Normal Running 54s migmigration_controller Step: 5/47
Normal Running 54s migmigration_controller Step: 6/47
Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47
Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47
Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready.
Normal Running 50s migmigration_controller Step: 9/47
Normal Running 50s migmigration_controller Step: 10/47
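If you are scripting the migration, one possible way to block until it finishes is to wait on the MigMigration CR conditions. The sketch below assumes that the CR reports a Succeeded condition on completion; adjust the condition name and timeout to your MTC version and workload:
$ oc wait migmigration <migmigration> -n openshift-migration \
  --for=condition=Succeeded --timeout=30m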
Migrating an application’s state
You can perform repeatable, state-only migrations by selecting specific persistent volume claims (PVCs). During a state migration, Migration Toolkit for Containers (MTC) copies persistent volume (PV) data to the target cluster. PV references are not moved. The application pods continue to run on the source cluster.
If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC.
You can use state migration to migrate namespaces within the same cluster.
Do not use state migration to migrate namespaces between clusters. Use stage or cutover migration instead.
You can migrate PV data from the source cluster to PVCs that are already provisioned in the target cluster by mapping PVCs in the MigPlan
CR. This ensures that the target PVCs of migrated applications are synchronized with the source PVCs.
You can perform a one-time migration of Kubernetes objects that store application state.
Excluding persistent volume claims
You can exclude persistent volume claims (PVCs) by adding the spec.persistentVolumes.pvc.selection.action
parameter to the MigPlan
custom resource (CR) after the persistent volumes (PVs) have been discovered.
Prerequisites
MigPlan
CR with discovered PVs.
Procedure
Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set its value to skip:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
name: <migplan>
namespace: openshift-migration
spec:
...
persistentVolumes:
- capacity: 10Gi
name: <pv_name>
pvc:
...
selection:
action: skip (1)
1 skip excludes the PVC from the migration plan.
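Instead of editing the manifest by hand, you can set the action with a patch. The sketch below assumes that the PVC to skip is the first entry in the discovered spec.persistentVolumes list; adjust the index to match your MigPlan CR:
$ oc patch migplan <migplan> -n openshift-migration --type=json -p \
  '[{"op": "replace", "path": "/spec/persistentVolumes/0/selection/action", "value": "skip"}]'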
Mapping persistent volume claims
You can map persistent volume claims (PVCs) by updating the spec.persistentVolumes.pvc.name
parameter in the MigPlan
custom resource (CR) after the persistent volumes (PVs) have been discovered.
Prerequisites
MigPlan
CR with discovered PVs.
Procedure
Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
name: <migplan>
namespace: openshift-migration
spec:
...
persistentVolumes:
- capacity: 10Gi
name: <pv_name>
pvc:
name: <source_pvc>:<destination_pvc> (1)
1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration.
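The same change can be made with a patch. As in the previous section, this sketch assumes that the PVC to map is the first entry in spec.persistentVolumes; adjust the index as needed:
$ oc patch migplan <migplan> -n openshift-migration --type=json -p \
  '[{"op": "replace", "path": "/spec/persistentVolumes/0/pvc/name", "value": "<source_pvc>:<destination_pvc>"}]'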
Migrating Kubernetes objects
You can perform a one-time migration of Kubernetes objects that constitute an application’s state.
You add Kubernetes objects to the MigPlan
CR by using the following options:
- Adding the Kubernetes objects to the includedResources section.
- Using the labelSelector parameter to reference labeled Kubernetes objects.
If you set both parameters, the label is used to filter the included resources, for example, to migrate Secret and ConfigMap resources with the label app: frontend.
Procedure
Update the MigPlan CR:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
name: <migplan>
namespace: openshift-migration
spec:
includedResources: (1)
- kind: <Secret>
group: ""
- kind: <ConfigMap>
group: ""
...
labelSelector:
matchLabels:
<app: frontend> (2)
1 Specify the kind and group of each resource.
2 Specify the label of the resources to migrate.
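If you rely on the labelSelector parameter, the objects on the source cluster must carry the matching label. For example, assuming the app: frontend label used above and placeholder object names:
$ oc label secret <secret_name> -n <application_namespace> app=frontend
$ oc label configmap <configmap_name> -n <application_namespace> app=frontend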
Migration hooks
You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.
A migration hook runs on a source or a target cluster at one of the following migration steps:
- PreBackup: Before resources are backed up on the source cluster.
- PostBackup: After resources are backed up on the source cluster.
- PreRestore: Before resources are restored on the target cluster.
- PostRestore: After resources are restored on the target cluster.
You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container.
Ansible playbook
The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan
custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed.
The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.6. This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary.
Custom hook container
You can use a custom hook container instead of the default Ansible image.
Writing an Ansible playbook for a migration hook
You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks
parameters in the MigPlan
custom resource (CR) manifest.
The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan
CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster.
Ansible modules
You can use the Ansible shell
module to run oc
commands.
Example shell module
- hosts: localhost
gather_facts: false
tasks:
- name: get pod name
shell: oc get po --all-namespaces
You can use kubernetes.core modules, such as k8s_info, to interact with Kubernetes resources.
Example k8s_info module
- hosts: localhost
gather_facts: false
tasks:
- name: Get pod
k8s_info:
kind: pods
api: v1
namespace: openshift-migration
name: "{{ lookup( 'env', 'HOSTNAME') }}"
register: pods
- name: Print pod name
debug:
msg: "{{ pods.resources[0].metadata.name }}"
You can use the fail
module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container.
Example fail module
- hosts: localhost
gather_facts: false
tasks:
- name: Set a boolean
set_fact:
do_fail: true
- name: "fail"
fail:
msg: "Cause a failure"
when: do_fail
Environment variables
The MigPlan
CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup
plug-in.
Example environment variables
- hosts: localhost
gather_facts: false
tasks:
- set_fact:
namespaces: "{{ (lookup( 'env', 'migration_namespaces')).split(',') }}"
- debug:
msg: "{{ item }}"
with_items: "{{ namespaces }}"
- debug:
msg: "{{ lookup( 'env', 'migplan_name') }}"
Configuration options
You can configure the following options for the MigPlan
and MigrationController
custom resources (CRs) to perform large-scale migrations and to improve performance.
Increasing limits for large migrations
You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC).
You must test these changes before you perform a migration in a production environment.
Procedure
Edit the MigrationController custom resource (CR) manifest:
$ oc edit migrationcontroller -n openshift-migration
Update the following parameters:
...
mig_controller_limits_cpu: "1" (1)
mig_controller_limits_memory: "10Gi" (2)
...
mig_controller_requests_cpu: "100m" (3)
mig_controller_requests_memory: "350Mi" (4)
...
mig_pv_limit: 100 (5)
mig_pod_limit: 100 (6)
mig_namespace_limit: 10 (7)
...
1 Specifies the number of CPUs available to the MigrationController CR.
2 Specifies the amount of memory available to the MigrationController CR.
3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3).
4 Specifies the amount of memory available for MigrationController CR requests.
5 Specifies the number of persistent volumes that can be migrated.
6 Specifies the number of pods that can be migrated.
7 Specifies the number of namespaces that can be migrated.
Create a migration plan that uses the updated parameters to verify the changes.
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan.
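You can check the values that are actually applied by inspecting the controller Deployment, for example:
$ oc get deployment migration-controller -n openshift-migration \
  -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'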
Excluding resources from a migration plan
You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan in order to reduce the resource load for migration or to migrate images or PVs with a different tool.
By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time.
Procedure
Edit the MigrationController custom resource manifest:
$ oc edit migrationcontroller <migration_controller> -n openshift-migration
Update the spec section by adding a parameter to exclude specific resources or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
name: migration-controller
namespace: openshift-migration
spec:
disable_image_migration: true (1)
disable_pv_migration: true (2)
...
excluded_resources: (3)
- imagetags
- templateinstances
- clusterserviceversions
- packagemanifests
- subscriptions
- servicebrokers
- servicebindings
- serviceclasses
- serviceinstances
- serviceplans
- operatorgroups
- events
- events.events.k8s.io
1 Add disable_image_migration: true to exclude image streams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the MigrationController pod restarts.
2 Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
3 You can add OKD resources to the excluded_resources list. Do not delete the default excluded resources. These resources are problematic to migrate and must be excluded.
Wait two minutes for the MigrationController pod to restart so that the changes are applied.
Verify that the resource is excluded:
$ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1
The output contains the excluded resources:
Example output
- name: EXCLUDED_RESOURCES
value:
imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims
Enabling persistent volume resizing for direct volume migration
You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster.
When the disk usage of a PV reaches a configured level, the MigrationController
custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster.
A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3%. This means that PV resizing occurs when the disk usage of a PV is more than 97%. You can increase this threshold so that PV resizing occurs at a lower disk usage level.
PVC capacity is calculated according to the following criteria:
- If the requested storage capacity (spec.resources.requests.storage) of the PVC is not equal to its actual provisioned capacity (status.capacity.storage), the greater value is used.
- If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used.
Prerequisites
- The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands.
Procedure
Log in to the host cluster.
Enable PV resizing by patching the MigrationController CR:
$ oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ (1)
  --type='merge' -n openshift-migration
1 Set the value to false to disable PV resizing.
Optional: Update the pv_resizing_threshold parameter to increase the threshold:
$ oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ (1)
  --type='merge' -n openshift-migration
1 The default value is 3.
When the threshold is exceeded, the following status message is displayed in the MigPlan CR status:
status:
conditions:
...
- category: Warn
durable: true
lastTransitionTime: "2021-06-17T08:57:01Z"
message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]'
reason: Done
status: "False"
type: PvCapacityAdjustmentRequired
For AWS gp2 storage, this message does not appear unless the
pv_resizing_threshold
is 42% or greater because of the way gp2 calculates volume usage and size. (BZ#1973148)
Enabling cached Kubernetes clients
You can enable cached Kubernetes clients in the MigrationController
custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency.
However, delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, do not show improved performance with cached clients.
Cached clients require extra memory because the MigrationController
CR caches all API resources that are required for interacting with MigCluster
CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates.
You can increase the memory limits and requests of the MigrationController
CR if OOMKilled
errors occur after you enable cached clients.
Procedure
Enable cached clients by running the following command:
$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
'[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]'
Optional: Increase the MigrationController CR memory limits by running the following command:
$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
'[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]'
Optional: Increase the MigrationController CR memory requests by running the following command:
$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
'[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]'