Use Flux to support Helm chart propagation

Flux is most useful as a deployment tool at the end of a Continuous Delivery pipeline: it makes sure that your new container images and config changes are propagated to the cluster. With Flux, Karmada can easily distribute applications packaged by Helm across clusters. Beyond that, with Karmada's OverridePolicy, users can customize applications for specific clusters and manage cross-cluster applications from the unified Karmada Control Plane.

Start up Karmada clusters

To start up Karmada, you can refer to the Karmada installation guide. If you just want to try Karmada, we recommend building a development environment with hack/local-up-karmada.sh.

```shell
git clone https://github.com/karmada-io/karmada
cd karmada
hack/local-up-karmada.sh
```

After that, the script starts a host Kubernetes cluster with kind to run the Karmada Control Plane and creates member clusters managed by Karmada.

```shell
kubectl get clusters --kubeconfig ~/.kube/karmada.config
```

You can use the command above to check the registered clusters; the output is similar to:

```
NAME      VERSION   MODE   READY   AGE
member1   v1.23.4   Push   True    7m38s
member2   v1.23.4   Push   True    7m35s
member3   v1.23.4   Pull   True    7m27s
```

Start up Flux

In the Karmada Control Plane, you need to install the Flux CRDs, but you do not need the controllers that reconcile them: the objects are treated as resource templates, not concrete resource instances. Based on Karmada's Work API, they are encapsulated as Work objects, delivered to member clusters, and finally reconciled by the Flux controllers running in those member clusters.

```shell
kubectl apply -k github.com/fluxcd/flux2/manifests/crds?ref=main --kubeconfig ~/.kube/karmada.config
```

For testing purposes, we’ll install Flux on member clusters without storing its manifests in a Git repository:

```shell
flux install --kubeconfig ~/.kube/members.config --context member1
flux install --kubeconfig ~/.kube/members.config --context member2
```

Tips:

1. If you want to manage Helm releases across your fleet of clusters, Flux must be installed on each cluster.

2. If the Flux toolkit controllers are installed successfully, you should see the following Pods:

```shell
$ kubectl get pod -n flux-system
NAME                                       READY   STATUS    RESTARTS   AGE
helm-controller-55896d6ccf-dlf8b           1/1     Running   0          15d
kustomize-controller-76795877c9-mbrsk      1/1     Running   0          15d
notification-controller-7ccfbfbb98-lpgjl   1/1     Running   0          15d
source-controller-6b8d9cb5cc-7dbcb         1/1     Running   0          15d
```
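Besides checking the Pods, the Flux CLI can validate an installation end to end (prerequisites, CRDs, and controller health). A hedged sketch, run once per member cluster; `flux check` only reads cluster state:

```shell
# Verify the Flux installation on each member cluster.
# `flux check` validates prerequisites, installed CRD versions,
# and the readiness of the toolkit controllers.
flux check --kubeconfig ~/.kube/members.config --context member1
flux check --kubeconfig ~/.kube/members.config --context member2
```

If any controller is unhealthy, the command reports the failing component instead of silently succeeding.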

Helm release propagation

If you want to propagate Helm releases for your apps to member clusters, you can refer to the guide below.

1. Define a Flux HelmRepository and a HelmRelease manifest in the Karmada Control Plane. They will serve as resource templates.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
spec:
  interval: 1m
  url: https://stefanprodan.github.io/podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: 5.0.3
      sourceRef:
        kind: HelmRepository
        name: podinfo
```
2. Define a Karmada PropagationPolicy that will propagate them to member clusters:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: helm-repo
spec:
  resourceSelectors:
    - apiVersion: source.toolkit.fluxcd.io/v1beta2
      kind: HelmRepository
      name: podinfo
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: helm-release
spec:
  resourceSelectors:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: podinfo
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```

The above configuration is for propagating the Flux objects to member1 and member2 clusters.
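Under the hood, Karmada wraps each matched resource template into Work objects in per-cluster execution namespaces, conventionally named `karmada-es-<cluster>`. A sketch for inspecting them, assuming that default namespace layout:

```shell
# List the Work objects Karmada created for member1. The HelmRepository
# and HelmRelease templates selected by the PropagationPolicies above
# are each wrapped into a Work manifest in this execution namespace.
kubectl get work -n karmada-es-member1 --kubeconfig ~/.kube/karmada.config
```

Seeing the Work objects here confirms that the policies matched your templates before you go looking inside the member clusters.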

3. Apply those manifests to the karmada-apiserver:

```shell
kubectl apply -f ../helm/ --kubeconfig ~/.kube/karmada.config
```

The output is similar to:

```
helmrelease.helm.toolkit.fluxcd.io/podinfo created
helmrepository.source.toolkit.fluxcd.io/podinfo created
propagationpolicy.policy.karmada.io/helm-release created
propagationpolicy.policy.karmada.io/helm-repo created
```
4. Switch to the distributed cluster and verify:

```shell
helm --kubeconfig ~/.kube/members.config --kube-context member1 list
```

The output is similar to:

```
NAME      NAMESPACE   REVISION   UPDATED                                  STATUS     CHART           APP VERSION
podinfo   default     1          2022-05-27 01:44:35.24229175 +0000 UTC   deployed   podinfo-5.0.3   5.0.3
```

Based on Karmada's PropagationPolicy, you can flexibly schedule Helm releases to the clusters you want, just as Kubernetes schedules Pods to the desired nodes.
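Listing cluster names is not the only placement option: `clusterAffinity` also accepts a `labelSelector`, so a policy can target any cluster carrying a given label. A hedged sketch, assuming you have labeled the member clusters yourself (the `env: dev` label and policy name below are hypothetical, not part of this guide's setup):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: helm-release-by-label   # hypothetical name
spec:
  resourceSelectors:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: podinfo
  placement:
    clusterAffinity:
      # Matches every registered cluster labeled env=dev, e.g. after
      # `kubectl label cluster member1 env=dev --kubeconfig ~/.kube/karmada.config`.
      labelSelector:
        matchLabels:
          env: dev
```

With label-based placement, newly registered clusters that carry the label receive the release automatically, without editing the policy.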

Customize the Helm release for specific clusters

The example above shows how to propagate the same Helm release to multiple clusters in Karmada. Besides, you can use Karmada's OverridePolicy to customize applications for specific clusters. For example, if you just want to change the number of replicas in member1, you can refer to the OverridePolicy below.

1. Define a Karmada OverridePolicy:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example-override
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: podinfo
  overrideRules:
    - targetCluster:
        clusterNames:
          - member1
      overriders:
        plaintext:
          - path: "/spec/values"
            operator: add
            value:
              replicaCount: 2
```
2. Apply the manifest to the karmada-apiserver:

```shell
kubectl apply -f example-override.yaml --kubeconfig ~/.kube/karmada.config
```

The output is similar to:

```
overridepolicy.policy.karmada.io/example-override configured
```

3. After applying the above policy in the Karmada Control Plane, you will find that the number of replicas in member1 has changed to 2, while member2 keeps the original value:

```shell
kubectl --kubeconfig ~/.kube/members.config --context member1 get po
```

The output is similar to:

```
NAME                       READY   STATUS    RESTARTS   AGE
podinfo-68979685bc-6wz6s   1/1     Running   0          6m28s
podinfo-68979685bc-dz9f6   1/1     Running   0          7m42s
```
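To confirm that the override really is scoped to member1, check the untouched cluster as well. A small sketch; member2 is not matched by the OverridePolicy's targetCluster, so podinfo should keep its default single replica there:

```shell
# member2 receives the unmodified HelmRelease template, so the chart's
# default replicaCount still applies; expect a single podinfo Pod.
kubectl --kubeconfig ~/.kube/members.config --context member2 get po
```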

Kustomize propagation

Kustomize propagation works basically the same way as the Helm chart propagation above. You can refer to the guide below.

1. Define a Flux GitRepository and a Kustomization manifest in the Karmada Control Plane:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: podinfo
spec:
  interval: 1m
  url: https://github.com/stefanprodan/podinfo
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: podinfo-dev
spec:
  interval: 5m
  path: "./deploy/overlays/dev/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: podinfo
  validation: client
  timeout: 80s
```
2. Define a Karmada PropagationPolicy that will propagate them to member clusters:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: kust-release
spec:
  resourceSelectors:
    - apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
      kind: Kustomization
      name: podinfo-dev
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: kust-git
spec:
  resourceSelectors:
    - apiVersion: source.toolkit.fluxcd.io/v1beta2
      kind: GitRepository
      name: podinfo
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```
3. Apply those YAMLs to the karmada-apiserver:

```shell
kubectl apply -f kust/ --kubeconfig ~/.kube/karmada.config
```

The output is similar to:

```
gitrepository.source.toolkit.fluxcd.io/podinfo created
kustomization.kustomize.toolkit.fluxcd.io/podinfo-dev created
propagationpolicy.policy.karmada.io/kust-git created
propagationpolicy.policy.karmada.io/kust-release created
```

4. Switch to the distributed cluster and verify:

```shell
kubectl --kubeconfig ~/.kube/members.config --context member1 get pod -n dev
```

The output is similar to:

```
NAME                        READY   STATUS    RESTARTS   AGE
backend-69c7655cb-rbtrq     1/1     Running   0          15s
cache-bdff5c8dc-mmnbm       1/1     Running   0          15s
frontend-7f98bf6f85-dw4vq   1/1     Running   0          15s
```
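You can also ask Flux itself whether the Kustomization reconciled cleanly, instead of inferring it from the Pods. A sketch run against a member cluster:

```shell
# Show the reconciliation status of all Kustomizations on member1;
# a READY column of True means the dev overlay was applied successfully.
flux get kustomizations --kubeconfig ~/.kube/members.config --context member1
```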
