Federated ResourceQuota

Background

As multi-cluster deployments become widespread, administrators often run a service across multiple clusters, and managing the resources such a service consumes in each cluster has become a new challenge. A traditional approach is for the administrator to manually create a namespace and a ResourceQuota in each Kubernetes cluster, letting Kubernetes limit resource consumption according to those ResourceQuotas. This approach is inconvenient and not flexible enough, and it is further challenged by the differences in service scale, available resources, and resource types across clusters.

Resource administrators often need to manage and control the resource consumption of each service from a global view. This is where the FederatedResourceQuota API comes in. The following are typical usage scenarios for FederatedResourceQuota.

What FederatedResourceQuota can do

FederatedResourceQuota supports:

  • Global quota management for applications that run on multiple clusters.
  • Fine-grained management of quotas for the same namespace across different clusters.
    • Ability to enumerate resource usage limits per namespace.
    • Ability to monitor resource usage for tracked resources.
    • Ability to reject resource usage exceeding hard quotas.

[Figure: unified resourcequota]

You can use FederatedResourceQuota to manage CPU, memory, storage and ephemeral-storage.
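
For instance, if you want to cap memory and ephemeral-storage alongside CPU, the overall section can list several resource types at once. The following is a minimal sketch, assuming the same field layout as the full example later on this page; the resource names and values here are only illustrative:

  apiVersion: policy.karmada.io/v1alpha1
  kind: FederatedResourceQuota
  metadata:
    name: example             # illustrative name
    namespace: test
  spec:
    overall:                  # total quota shared across member clusters
      cpu: 100
      memory: 200Gi
      ephemeral-storage: 500Gi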

Deploy a simple FederatedResourceQuota

Assume you, an administrator, want to deploy service A across multiple clusters in namespace test.

You can create a namespace called test on the Karmada control plane. Karmada will automatically create the corresponding namespace in the member clusters.

  kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver create ns test

You want to set an overall CPU limit of 100 cores for service A. The available CPU on each cluster is:

  • member1: 20C
  • member2: 50C
  • member3: 100C

In this example, you allocate 20C from member1, 50C from member2, and 30C from member3 to service A (20 + 50 + 30 = 100 in total). The remaining capacity on member3 is reserved for more important services.

You can deploy a FederatedResourceQuota as follows.

  apiVersion: policy.karmada.io/v1alpha1
  kind: FederatedResourceQuota
  metadata:
    name: test
    namespace: test
  spec:
    overall:
      cpu: 100
    staticAssignments:
    - clusterName: member1
      hard:
        cpu: 20
    - clusterName: member2
      hard:
        cpu: 50
    - clusterName: member3
      hard:
        cpu: 30

Verify the status of FederatedResourceQuota:

  kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get federatedresourcequotas/test -ntest -oyaml

The output is similar to:

  spec:
    overall:
      cpu: 100
    staticAssignments:
    - clusterName: member1
      hard:
        cpu: 20
    - clusterName: member2
      hard:
        cpu: 50
    - clusterName: member3
      hard:
        cpu: 30
  status:
    aggregatedStatus:
    - clusterName: member1
      hard:
        cpu: "20"
      used:
        cpu: "0"
    - clusterName: member2
      hard:
        cpu: "50"
      used:
        cpu: "0"
    - clusterName: member3
      hard:
        cpu: "30"
      used:
        cpu: "0"
    overall:
      cpu: "100"
    overallUsed:
      cpu: "0"

For a quick test, you can deploy a simple application that requests 1C of CPU to member1. Remember that a workload created on the Karmada control plane still needs to be propagated to member1; see the PropagationPolicy sketch after the manifest below.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx
    namespace: test
    labels:
      app: nginx
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - image: nginx
          name: nginx
          resources:
            requests:
              cpu: 1
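
To actually place this Deployment on member1, you would typically pair it with a PropagationPolicy. The following is a minimal sketch under that assumption; the policy name nginx-propagation is illustrative:

  apiVersion: policy.karmada.io/v1alpha1
  kind: PropagationPolicy
  metadata:
    name: nginx-propagation   # illustrative name
    namespace: test
  spec:
    resourceSelectors:        # select the nginx Deployment above
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
    placement:
      clusterAffinity:
        clusterNames:         # schedule it to member1 only
        - member1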

Verify the status of the FederatedResourceQuota again, and you will find that it correctly monitors resource usage for the tracked resources.

  kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get federatedresourcequotas/test -ntest -oyaml

  spec:
    overall:
      cpu: 100
    staticAssignments:
    - clusterName: member1
      hard:
        cpu: 20
    - clusterName: member2
      hard:
        cpu: 50
    - clusterName: member3
      hard:
        cpu: 30
  status:
    aggregatedStatus:
    - clusterName: member1
      hard:
        cpu: "20"
      used:
        cpu: "1"
    - clusterName: member2
      hard:
        cpu: "50"
      used:
        cpu: "0"
    - clusterName: member3
      hard:
        cpu: "30"
      used:
        cpu: "0"
    overall:
      cpu: "100"
    overallUsed:
      cpu: "1"

Note

FederatedResourceQuota is still a work in progress. We are in the process of gathering use cases. If you are interested in this feature, please feel free to open an enhancement issue to let us know.