Customize the scheduler

Karmada ships with a default scheduler that is described here. If the default scheduler does not suit your needs, you can implement your own. Karmada's scheduler framework is similar to that of Kubernetes, but unlike Kubernetes, which schedules Pods to nodes, Karmada schedules applications to a group of clusters. Based on the placement field of the user's scheduling policy and the algorithms of the scheduling plugins, the user's application is deployed to the desired group of clusters.

The scheduling process can be divided into the following four steps:

  • Predicate: filter inappropriate clusters

  • Priority: score the clusters

  • SelectClusters: select cluster groups based on cluster scores and SpreadConstraint

  • ReplicaScheduling: deploy the application replicas on the selected cluster group according to the configured replica scheduling policy

    [Figure: schedule process]

Among these steps, the filter and score plugins can be customized and configured based on the scheduler framework.

The default scheduler has several in-tree plugins:

  • APIEnablement: a plugin that checks if the API (CRD) of the resource is installed in the target cluster.
  • TaintToleration: a plugin that checks if a propagation policy tolerates a cluster's taints.
  • ClusterAffinity: a plugin that checks if a resource selector matches the cluster label.
  • SpreadConstraint: a plugin that checks whether a cluster provides the spread property in Cluster.Spec required by the spread constraints.
  • ClusterLocality: a score plugin that favors clusters that already have the resource.
  • ClusterEviction: a plugin that checks if the target cluster is in the GracefulEvictionTasks, which means it is in the process of being evicted.

You can implement out-of-tree plugins for your own scenario and build a custom scheduler with Karmada's scheduler framework. This document gives a detailed description of how to customize a Karmada scheduler.

Before you begin

You need to have a Karmada control plane. To start up Karmada, you can refer to the installation guide. If you just want to try Karmada, we recommend building a development environment with hack/local-up-karmada.sh:

  git clone https://github.com/karmada-io/karmada
  cd karmada
  hack/local-up-karmada.sh

Deploy a plugin

Assume you want to deploy a new filter plugin named TestFilter. You can refer to the karmada-scheduler implementation in pkg/scheduler/framework/plugins in the Karmada source directory. After development, the code directory will look similar to this:

  .
  ├── apienablement
  ├── clusteraffinity
  ├── clustereviction
  ├── clusterlocality
  ├── spreadconstraint
  ├── tainttoleration
  └── testfilter
      └── test_filter.go

The content of the test_filter.go file is as follows; the specific filtering logic is omitted.

  package testfilter

  import (
      "context"

      clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
      workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"

      "github.com/karmada-io/karmada/pkg/scheduler/framework"
  )

  const (
      // Name is the name of the plugin used in the plugin registry and configurations.
      Name = "TestFilter"
  )

  // TestFilter is a custom filter plugin.
  type TestFilter struct{}

  var _ framework.FilterPlugin = &TestFilter{}

  // New instantiates the TestFilter plugin.
  func New() (framework.Plugin, error) {
      return &TestFilter{}, nil
  }

  // Name returns the plugin name.
  func (p *TestFilter) Name() string {
      return Name
  }

  // Filter implements the filtering logic of the TestFilter plugin.
  func (p *TestFilter) Filter(ctx context.Context, bindingSpec *workv1alpha2.ResourceBindingSpec,
      bindingStatus *workv1alpha2.ResourceBindingStatus, cluster *clusterv1alpha1.Cluster) *framework.Result {
      // implementation
      return framework.NewResult(framework.Success)
  }

For a filter plugin, you must implement the framework.FilterPlugin interface; for a score plugin, you must implement the framework.ScorePlugin interface.
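As an illustration, here is a minimal sketch of a score plugin. The testscore package, the TestScore plugin name, and the label-based scoring rule are hypothetical; the Score and ScoreExtensions signatures mirror the in-tree ClusterLocality plugin, so verify them against the framework package of your Karmada version.

  package testscore

  import (
      "context"

      clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
      workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"

      "github.com/karmada-io/karmada/pkg/scheduler/framework"
  )

  // Name is the name of the hypothetical score plugin.
  const Name = "TestScore"

  // TestScore favors clusters that carry an illustrative "preferred=true" label.
  type TestScore struct{}

  var _ framework.ScorePlugin = &TestScore{}

  // New instantiates the TestScore plugin.
  func New() (framework.Plugin, error) {
      return &TestScore{}, nil
  }

  // Name returns the plugin name.
  func (p *TestScore) Name() string {
      return Name
  }

  // Score ranks a cluster that passed the filter phase; higher is better.
  // The label key and the score values below are illustrative only.
  func (p *TestScore) Score(ctx context.Context,
      spec *workv1alpha2.ResourceBindingSpec, cluster *clusterv1alpha1.Cluster) (int64, *framework.Result) {
      if cluster.Labels["preferred"] == "true" {
          return 100, framework.NewResult(framework.Success)
      }
      return 0, framework.NewResult(framework.Success)
  }

  // ScoreExtensions returns nil because this sketch does not normalize scores.
  func (p *TestScore) ScoreExtensions() framework.ScoreExtensions {
      return nil
  }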

Register the plugin

Edit the cmd/scheduler/main.go:

  package main

  import (
      "os"

      "k8s.io/component-base/cli"
      _ "k8s.io/component-base/logs/json/register" // for JSON log format registration
      controllerruntime "sigs.k8s.io/controller-runtime"
      _ "sigs.k8s.io/controller-runtime/pkg/metrics"

      "github.com/karmada-io/karmada/cmd/scheduler/app"
      "github.com/karmada-io/karmada/pkg/scheduler/framework/plugins/testfilter"
  )

  func main() {
      stopChan := controllerruntime.SetupSignalHandler().Done()
      command := app.NewSchedulerCommand(stopChan, app.WithPlugin(testfilter.Name, testfilter.New))
      code := cli.Run(command)
      os.Exit(code)
  }

To register the plugin, pass its name and factory function to the NewSchedulerCommand function via the app.WithPlugin option.
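If you develop more than one plugin, register each with its own app.WithPlugin option. Below is a minimal sketch, assuming NewSchedulerCommand accepts a variadic list of plugin options and that the hypothetical testscore plugin sketched earlier lives next to testfilter:

  package main

  import (
      "os"

      "k8s.io/component-base/cli"
      controllerruntime "sigs.k8s.io/controller-runtime"

      "github.com/karmada-io/karmada/cmd/scheduler/app"
      "github.com/karmada-io/karmada/pkg/scheduler/framework/plugins/testfilter"
      // testscore is the hypothetical score plugin sketched above.
      "github.com/karmada-io/karmada/pkg/scheduler/framework/plugins/testscore"
  )

  func main() {
      stopChan := controllerruntime.SetupSignalHandler().Done()
      // Each app.WithPlugin option adds one out-of-tree plugin to the registry.
      command := app.NewSchedulerCommand(stopChan,
          app.WithPlugin(testfilter.Name, testfilter.New),
          app.WithPlugin(testscore.Name, testscore.New),
      )
      os.Exit(cli.Run(command))
  }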

Package the scheduler

After you register the plugin, you need to package your scheduler binary into a container image.

  cd karmada
  export VERSION=## Your Image Tag
  make image-karmada-scheduler

After the image is built and pushed to a registry your host cluster can pull from, edit the karmada-scheduler deployment to use it:

  $ kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
  ...
    spec:
      automountServiceAccountToken: false
      containers:
      - command:
        - /bin/karmada-scheduler
        - --kubeconfig=/etc/kubeconfig
        - --bind-address=0.0.0.0
        - --secure-port=10351
        - --enable-scheduler-estimator=true
        - --v=4
        image: ## Your Image Address
  ...

When you start the scheduler, you can see from the logs that the TestFilter plugin has been enabled:

  I0408 12:57:14.563522 1 scheduler.go:141] karmada-scheduler version: version.Info{GitVersion:"v1.9.0-preview5", GitCommit:"0126b90fc89d2f5509842ff8dc7e604e84288b96", GitTreeState:"clean", BuildDate:"2024-01-29T13:29:49Z", GoVersion:"go1.20.11", Compiler:"gc", Platform:"linux/amd64"}
  I0408 12:57:14.564979 1 registry.go:79] Enable Scheduler plugin "ClusterAffinity"
  I0408 12:57:14.564991 1 registry.go:79] Enable Scheduler plugin "SpreadConstraint"
  I0408 12:57:14.564996 1 registry.go:79] Enable Scheduler plugin "ClusterLocality"
  I0408 12:57:14.564999 1 registry.go:79] Enable Scheduler plugin "ClusterEviction"
  I0408 12:57:14.565002 1 registry.go:79] Enable Scheduler plugin "APIEnablement"
  I0408 12:57:14.565005 1 registry.go:79] Enable Scheduler plugin "TaintToleration"
  I0408 12:57:14.565008 1 registry.go:79] Enable Scheduler plugin "TestFilter"

Config the plugin

You can configure plugin enablement with the --plugins flag. For example, the following config disables the TestFilter plugin: * enables all registered plugins, and a leading - disables the named plugin.

  $ kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
  ...
    spec:
      automountServiceAccountToken: false
      containers:
      - command:
        - /bin/karmada-scheduler
        - --kubeconfig=/etc/kubeconfig
        - --bind-address=0.0.0.0
        - --secure-port=10351
        - --enable-scheduler-estimator=true
        - --plugins=*,-TestFilter
        - --v=4
        image: ## Your Image Address
  ...

Configure Multiple Schedulers

Run the second scheduler in the cluster

You can run multiple schedulers simultaneously alongside the default scheduler and instruct Karmada which scheduler to use for each of your workloads. Here is a sample deployment config. You can save it as my-scheduler.yaml:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-karmada-scheduler
    namespace: karmada-system
    labels:
      app: my-karmada-scheduler
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: my-karmada-scheduler
    template:
      metadata:
        labels:
          app: my-karmada-scheduler
      spec:
        automountServiceAccountToken: false
        tolerations:
          - key: node-role.kubernetes.io/master
            operator: Exists
        containers:
          - name: karmada-scheduler
            image: docker.io/karmada/karmada-scheduler:latest
            imagePullPolicy: IfNotPresent
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10351
                scheme: HTTP
              failureThreshold: 3
              initialDelaySeconds: 15
              periodSeconds: 15
              timeoutSeconds: 5
            command:
              - /bin/karmada-scheduler
              - --kubeconfig=/etc/kubeconfig
              - --bind-address=0.0.0.0
              - --secure-port=10351
              - --enable-scheduler-estimator=true
              - --leader-elect-resource-name=my-scheduler # Your custom scheduler name
              - --scheduler-name=my-scheduler # Your custom scheduler name
              - --v=4
            volumeMounts:
              - name: kubeconfig
                subPath: kubeconfig
                mountPath: /etc/kubeconfig
        volumes:
          - name: kubeconfig
            secret:
              secretName: kubeconfig

Note: the --leader-elect-resource-name option defaults to karmada-scheduler. If you deploy another scheduler alongside the default scheduler, this option should be specified, and it is recommended to use the scheduler name as its value.

In order to run your scheduler in Karmada, create the deployment specified in the config above:

  kubectl --context karmada-host create -f my-scheduler.yaml

Verify that the scheduler pod is running:

  kubectl --context karmada-host get pods --namespace=karmada-system

  NAME                               READY   STATUS    RESTARTS   AGE
  ...
  my-karmada-scheduler-lnf4s-4744f   1/1     Running   0          2m
  ...

You should see a “Running” my-karmada-scheduler pod, in addition to the default karmada-scheduler pod in this list.

Specify schedulers for deployments

Now that your second scheduler is running, create some deployments and direct them to be scheduled by either the default scheduler or the one you deployed. To schedule a given deployment with a specific scheduler, specify the name of the scheduler in the PropagationPolicy spec. Let's look at three examples.

  • PropagationPolicy spec without any scheduler name
  apiVersion: policy.karmada.io/v1alpha1
  kind: PropagationPolicy
  metadata:
    name: nginx-propagation
  spec:
    resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment
        name: nginx
    placement:
      clusterAffinity:
        clusterNames:
          - member1
          - member2

When no scheduler name is supplied, the deployment is automatically scheduled using the default-scheduler.

  • PropagationPolicy spec with default-scheduler
  apiVersion: policy.karmada.io/v1alpha1
  kind: PropagationPolicy
  metadata:
    name: nginx-propagation
  spec:
    schedulerName: default-scheduler
    resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment
        name: nginx
    placement:
      clusterAffinity:
        clusterNames:
          - member1
          - member2

A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. In this case, we supply the name of the default scheduler which is default-scheduler.

  • PropagationPolicy spec with my-scheduler
  apiVersion: policy.karmada.io/v1alpha1
  kind: PropagationPolicy
  metadata:
    name: nginx-propagation
  spec:
    schedulerName: my-scheduler
    resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment
        name: nginx
    placement:
      clusterAffinity:
        clusterNames:
          - member1
          - member2

In this case, we specify that this deployment should be scheduled using the scheduler that we deployed, my-scheduler. Note that the value of spec.schedulerName should match the name supplied via the --scheduler-name option of the second scheduler.

Verifying that the deployments were scheduled using the desired schedulers

To verify that the deployments were scheduled by the desired schedulers, look at the "Scheduled" entries in the event logs:

  kubectl --context karmada-apiserver describe deploy/nginx