Setup Multi User

Fleet uses Kubernetes RBAC where possible.

One addition on top of RBAC is the GitRepoRestriction resource, which can be used to control GitRepo resources in a namespace.

A multi-user fleet setup looks like this:

  • tenants don’t share namespaces; each tenant has one or more namespaces on the upstream cluster, where they can create GitRepo resources
  • tenants can’t deploy cluster-wide resources and are limited to a set of namespaces on downstream clusters
  • clusters are in a separate namespace

Shared Clusters

**Important**

The isolation of tenants is not complete and relies on Kubernetes RBAC being set up correctly. Without manual setup by an operator, tenants can still deploy cluster-wide resources. Even with the available Fleet restrictions, users are only restricted to namespaces, and namespaces don’t provide much isolation on their own. For example, tenants can still consume as many resources as they like.

However, the existing Fleet restrictions do allow users to share clusters and deploy resources without conflicts.

Example Fleet Standalone

The following commands create a service account ‘fleetuser’, which can only manage GitRepo resources in the ‘project1’ namespace.

```bash
kubectl create serviceaccount fleetuser
kubectl create namespace project1
kubectl create -n project1 role fleetuser --verb=get --verb=list --verb=create --verb=delete --resource=gitrepos.fleet.cattle.io
kubectl create -n project1 rolebinding fleetuser --serviceaccount=default:fleetuser --role=fleetuser
```
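
A quick way for the tenant to authenticate as this service account is to request a short-lived token; this sketch assumes Kubernetes v1.24 or later:

```bash
# request a short-lived token for the fleetuser service account
kubectl create token fleetuser --namespace default
```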

If we want to give access to multiple namespaces, we can use a single cluster role with two role bindings:

```bash
kubectl create clusterrole fleetuser --verb=get --verb=list --verb=create --verb=delete --resource=gitrepos.fleet.cattle.io
kubectl create -n project1 rolebinding fleetuser --serviceaccount=default:fleetuser --clusterrole=fleetuser
kubectl create -n project2 rolebinding fleetuser --serviceaccount=default:fleetuser --clusterrole=fleetuser
```

This makes sure tenants can’t interfere with GitRepo resources from other tenants, since they don’t have access to each other’s namespaces.
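
To verify the bindings behave as intended, you can impersonate the service account with kubectl auth can-i; ‘project3’ below is just a hypothetical namespace the tenant was not granted access to:

```bash
# expected "yes": fleetuser may create GitRepos in project1
kubectl auth can-i create gitrepos.fleet.cattle.io -n project1 --as=system:serviceaccount:default:fleetuser

# expected "no": there is no binding for fleetuser in project3 (hypothetical namespace)
kubectl auth can-i create gitrepos.fleet.cattle.io -n project3 --as=system:serviceaccount:default:fleetuser
```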

Example Fleet in Rancher

When a new fleet workspace is created, a corresponding namespace with an identical name is automatically generated within the Rancher local cluster. For a user to see and deploy fleet resources in a specific workspace, they need at least the following permissions:

  • list/get the fleetworkspace cluster-wide resource in the local cluster
  • permissions to create fleet resources (such as bundles, gitrepos, …) in the backing namespace for the workspace in the local cluster

Let’s grant permissions to deploy fleet resources in the project1 and project2 fleet workspaces:

  • To create the project1 and project2 fleet workspaces, you can either do it in the Rancher UI or apply the following YAML resources:

```yaml
apiVersion: management.cattle.io/v3
kind: FleetWorkspace
metadata:
  name: project1
```

```yaml
apiVersion: management.cattle.io/v3
kind: FleetWorkspace
metadata:
  name: project2
```

  • Create a GlobalRole that grants permission to deploy fleet resources in the project1 and project2 fleet workspaces:

```yaml
apiVersion: management.cattle.io/v3
kind: GlobalRole
metadata:
  name: fleet-projects1and2
namespacedRules:
  project1:
    - apiGroups:
        - fleet.cattle.io
      resources:
        - gitrepos
        - bundles
        - clusterregistrationtokens
        - gitreporestrictions
        - clusters
        - clustergroups
      verbs:
        - '*'
  project2:
    - apiGroups:
        - fleet.cattle.io
      resources:
        - gitrepos
        - bundles
        - clusterregistrationtokens
        - gitreporestrictions
        - clusters
        - clustergroups
      verbs:
        - '*'
rules:
  - apiGroups:
      - management.cattle.io
    resourceNames:
      - project1
      - project2
    resources:
      - fleetworkspaces
    verbs:
      - '*'
```
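
Apply the manifests above and confirm the workspaces and their backing namespaces exist; the filename here is just an example:

```bash
kubectl apply -f fleet-rbac.yaml
kubectl get fleetworkspaces.management.cattle.io
kubectl get namespace project1 project2
```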

Assign the GlobalRole to users or groups; more information can be found in the Rancher docs.
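
For reference, a minimal GlobalRoleBinding could look like this; the user ID ‘u-abcde’ is a placeholder for an actual Rancher user:

```yaml
apiVersion: management.cattle.io/v3
kind: GlobalRoleBinding
metadata:
  name: fleet-projects1and2-binding
globalRoleName: fleet-projects1and2
# placeholder: replace with the ID of an existing Rancher user
userName: u-abcde
```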

The user now has access to the Continuous Delivery tab in Rancher and can deploy resources to both the project1 and project2 workspaces.

Allow Access to Clusters

The following example assumes all GitRepos created by ‘fleetuser’ have the team: one label. Different labels could be used to select different cluster namespaces.

In each of the user’s namespaces, as an admin, create a BundleNamespaceMapping resource:

```yaml
kind: BundleNamespaceMapping
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: mapping
  namespace: project1

# Bundles to match by label.
# The labels are defined in the fleet.yaml labels field or from the
# GitRepo metadata.labels field
bundleSelector:
  matchLabels:
    team: one
    # or target one repo
    #fleet.cattle.io/repo-name: simpleapp

# Namespaces, containing clusters, to match by label
namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: fleet-default
    # the label is on the namespace
    #workspace: prod
```

The target section in the GitRepo resource can be used to deploy only to a subset of the matched clusters.
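
For example, a targets entry with a cluster selector narrows the deployment to matching clusters only; the env: dev label here is illustrative (the full GitRepo example below uses the same pattern):

```yaml
targets:
  - name: dev
    clusterSelector:
      matchLabels:
        env: dev
```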

Restricting Access to Downstream Clusters

Admins can further restrict tenants by creating a GitRepoRestriction in each of their namespaces.

```yaml
kind: GitRepoRestriction
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: restriction
  namespace: project1
allowedTargetNamespaces:
  - project1simpleapp
```

This denies the creation of cluster-wide resources, which could interfere with other tenants, and limits deployments to the ‘project1simpleapp’ namespace.

An Example GitRepo Resource

A GitRepo resource created by a tenant without admin access could look like this:

```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: simpleapp
  namespace: project1
  labels:
    team: one
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
    - bundle-diffs
  targetNamespace: project1simpleapp
  # do not match the upstream/local cluster, won't work
  targets:
    - name: dev
      clusterSelector:
        matchLabels:
          env: dev
```

This includes the team: one label and the required targetNamespace.

Together with the previous BundleNamespaceMapping, it would target all clusters with an env: dev label in the ‘fleet-default’ namespace.
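
For this mapping to match, the downstream cluster’s Cluster resource in the ‘fleet-default’ namespace needs the env: dev label. Assuming a cluster registered as ‘my-downstream’ (a hypothetical name), an admin could add the label with:

```bash
kubectl label clusters.fleet.cattle.io my-downstream env=dev -n fleet-default
```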

**Note**

BundleNamespaceMappings do not work with local clusters, so make sure not to target them.