Migration From Kubefed

Karmada is developed in continuation of Kubernetes Federation v1 and Federation v2 (aka Kubefed), and it has inherited a lot of concepts from these two versions. For example:

  • Resource template: Karmada uses the Kubernetes native API definition as the federated resource template, making it easy to integrate with existing tools that already adopt Kubernetes.
  • Propagation Policy: Karmada offers a standalone Propagation (placement) Policy API to define multi-cluster scheduling and spreading requirements.
  • Override Policy: Karmada provides a standalone Override Policy API for automating cluster-specific configuration.

Most of the features in Kubefed have been reworked in Karmada, making Karmada its natural successor.

Generally speaking, migrating from Kubefed to Karmada is pretty easy. This document outlines the basic migration path for Kubefed users. Note: this document is a work in progress; any feedback is welcome.

Cluster Registration

Kubefed provides join and unjoin commands in its kubefedctl command line tool; Karmada implements the same two commands in karmadactl.

Refer to Kubefed Cluster Registration and Karmada Cluster Registration for more details.

Joining Clusters

Assume you used the kubefedctl tool to join a cluster as follows:

```shell
kubefedctl join cluster1 --cluster-context cluster1 --host-cluster-context cluster1
```

Now with Karmada, you can use the karmadactl tool to do the same thing:

```shell
karmadactl join cluster1 --cluster-context cluster1 --karmada-context karmada
```

The behavior behind the join command is similar in Kubefed and Karmada: Kubefed creates a KubeFedCluster object, while Karmada creates a Cluster object to describe the joined cluster.
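For reference, the Cluster object that Karmada generates looks roughly like the sketch below. The endpoint and secret values here are illustrative placeholders; the real values are filled in automatically by `karmadactl join`:

```yaml
apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: cluster1
spec:
  # Placeholder endpoint; karmadactl join fills in the member
  # cluster's real API server address.
  apiEndpoint: https://cluster1.example.com:6443
  # Clusters joined via karmadactl join are managed in Push mode.
  syncMode: Push
  # Reference to the secret holding credentials for accessing
  # the member cluster (created during join).
  secretRef:
    namespace: karmada-cluster
    name: cluster1
```

You normally don't create this object by hand for Push-mode clusters; it is shown only to illustrate what the join command produces.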

Checking status of joined clusters

Assume you used kubectl to check the status of the joined clusters as follows:

```shell
$ kubectl -n kube-federation-system get kubefedclusters
NAME       AGE   READY   KUBERNETES-VERSION
cluster1   1m    True    v1.21.2
cluster2   1m    True    v1.22.0
```

Now with Karmada, you can run kubectl against the Karmada control plane to do the same thing:

```shell
$ kubectl get clusters
NAME      VERSION   MODE   READY   AGE
member1   v1.20.7   Push   True    66s
```

Kubefed manages clusters only in Push mode, whereas Karmada supports both Push and Pull modes. Refer to Overview of cluster mode for more details.

Unjoining clusters

Assume you used the kubefedctl tool to unjoin a cluster as follows:

```shell
kubefedctl unjoin cluster2 --cluster-context cluster2 --host-cluster-context cluster1
```

Now with Karmada, you can use the karmadactl tool to do the same thing:

```shell
karmadactl unjoin cluster2 --cluster-context cluster2 --karmada-context karmada
```

The behavior behind the unjoin command is similar in Kubefed and Karmada: both remove the cluster from the control plane by deleting the cluster object.

Propagating workload to clusters

Assume you are going to propagate a workload (a Deployment) to two clusters named cluster1 and cluster2. With Kubefed, you might deploy the following YAML:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx
            name: nginx
  placement:
    clusters:
    - name: cluster2
    - name: cluster1
  overrides:
  - clusterName: cluster2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
    - path: "/spec/template/spec/containers/0/image"
      value: "nginx:1.17.0-alpine"
    - path: "/metadata/annotations"
      op: "add"
      value:
        foo: bar
    - path: "/metadata/annotations/foo"
      op: "remove"
```

Now with Karmada, this YAML can be split into three parts, one each for the template, the placement, and the overrides.

In Karmada, the template doesn't need to be embedded in a federated CRD; it is exactly the same as a Kubernetes native declaration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
```

For the placement part, Karmada provides the PropagationPolicy API to hold the placement rules:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  placement:
    clusterAffinity:
      clusterNames:
      - cluster1
      - cluster2
```

The PropagationPolicy defines which resources (resourceSelectors) should be propagated where (placement). See Resource Propagating for more details.

For the override part, Karmada provides the OverridePolicy API to hold the rules for differentiation:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example-override
  namespace: default
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  overrideRules:
  - targetCluster:
      clusterNames:
      - cluster2
    overriders:
      plaintext:
      - path: "/spec/replicas"
        operator: replace
        value: 5
      - path: "/metadata/annotations"
        operator: add
        value:
          foo: bar
      - path: "/metadata/annotations/foo"
        operator: remove
      imageOverrider:
      - component: Tag
        operator: replace
        value: 1.17.0-alpine
```

The OverridePolicy defines which resources (resourceSelectors) should be overridden when propagated to which clusters (targetCluster).

Compared with Kubefed, Karmada offers a richer set of ways to declare override rules; see Overriders for more details.
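As one illustration, the annotation edits expressed with plaintext JSON-patch paths above could also be written with a dedicated annotations overrider. This is a sketch based on the `annotationsOverrider` field of the OverridePolicy API; consult the Overriders documentation for the authoritative schema:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example-annotations-override
  namespace: default
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  overrideRules:
  - targetCluster:
      clusterNames:
      - cluster2
    overriders:
      # Dedicated overrider for annotations: equivalent in effect to the
      # plaintext "/metadata/annotations" edits, but without hand-written
      # JSON-patch paths, so it is less error-prone.
      annotationsOverrider:
      - operator: add
        value:
          foo: bar
```

Similar dedicated overriders exist for other common fields, so most overrides can be declared without falling back to plaintext paths.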

FAQ

Will Karmada provide tools to smooth the migration?

We don't have such a plan yet. We reached out to some Kubefed users and found that they usually don't run vanilla Kubefed but a forked version, extended heavily to meet their own requirements. So it might be pretty hard to maintain a common tool that satisfies most users.

We are also looking forward to more feedback; please feel free to reach out, and we will be glad to help you finish the migration.