Namespaces Walkthrough

Kubernetes namespaces help different projects, teams, or customers to share a Kubernetes cluster.

Namespaces do this by providing:

  1. A scope for names.
  2. A mechanism to attach authorization and policy to a subsection of the cluster.

Use of multiple namespaces is optional.

This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of the online Kubernetes playgrounds.

To check the version, enter kubectl version.

Prerequisites

This example assumes the following:

  1. You have an existing Kubernetes cluster.
  2. You have a basic understanding of Kubernetes Pods, Services, and Deployments.

Understand the default namespace

By default, a Kubernetes cluster instantiates a default namespace when the cluster is provisioned, to hold the default set of Pods, Services, and Deployments used by the cluster.

Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:

```shell
kubectl get namespaces
```
```
NAME      STATUS    AGE
default   Active    13m
```

Create new namespaces

For this exercise, we will create two additional Kubernetes namespaces to hold our content.

Let’s imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.

The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources are relaxed to enable agile development.

The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site.

One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.

Let’s create two new namespaces to hold our work.

Use the file namespace-dev.yaml which describes a development namespace:

admin/namespace-dev.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
```

Create the development namespace using kubectl.

```shell
kubectl create -f https://k8s.io/examples/admin/namespace-dev.yaml
```

Save the following contents into file namespace-prod.yaml which describes a production namespace:

admin/namespace-prod.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
```

And then let’s create the production namespace using kubectl.

```shell
kubectl create -f https://k8s.io/examples/admin/namespace-prod.yaml
```

To be sure things are right, let’s list all of the namespaces in our cluster.

```shell
kubectl get namespaces --show-labels
```
```
NAME          STATUS    AGE       LABELS
default       Active    32m       <none>
development   Active    29s       name=development
production    Active    23s       name=production
```
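As an aside, namespaces can also be created imperatively, without manifest files. A rough equivalent of the two manifests above (the labels must be added separately, since kubectl create namespace does not set a name label):

```shell
# Create the two namespaces directly.
kubectl create namespace development
kubectl create namespace production

# Add the name=... labels to match the manifests above.
kubectl label namespace development name=development
kubectl label namespace production name=production
```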

Create pods in each namespace

A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.

Users interacting with one namespace do not see the content in another namespace.

To demonstrate this, let’s spin up a simple Deployment and Pods in the development namespace.

First, we check the current context:

```shell
kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
```

```shell
kubectl config current-context
```
```
lithe-cocoa-92103_kubernetes
```

The next step is to define a context for the kubectl client to work in each namespace. The values of the “cluster” and “user” fields are copied from the current context.

```shell
kubectl config set-context dev --namespace=development \
  --cluster=lithe-cocoa-92103_kubernetes \
  --user=lithe-cocoa-92103_kubernetes

kubectl config set-context prod --namespace=production \
  --cluster=lithe-cocoa-92103_kubernetes \
  --user=lithe-cocoa-92103_kubernetes
```

By default, the above commands add two contexts that are saved into the file .kube/config. You can now view the contexts and switch between them depending on which namespace you wish to work against.
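A quicker way to see just the contexts, rather than the whole config, is kubectl config get-contexts; the current context is marked with an asterisk in the CURRENT column:

```shell
# List only the contexts defined in the kubeconfig.
kubectl config get-contexts
```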

To view the new contexts:

```shell
kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://130.211.122.180
  name: lithe-cocoa-92103_kubernetes
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
  name: lithe-cocoa-92103_kubernetes
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: development
    user: lithe-cocoa-92103_kubernetes
  name: dev
- context:
    cluster: lithe-cocoa-92103_kubernetes
    namespace: production
    user: lithe-cocoa-92103_kubernetes
  name: prod
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
  user:
    password: h5M0FtUUIflBSdI7
    username: admin
```

Let’s switch to operate in the development namespace.

```shell
kubectl config use-context dev
```

You can verify your current context by doing the following:

```shell
kubectl config current-context
```
```
dev
```

At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
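Note that switching contexts is a convenience, not a requirement: you can also target a namespace one command at a time with the --namespace (or -n) flag, without changing the current context. For example:

```shell
# Query the production namespace while still in the dev context.
kubectl get pods --namespace=production
```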

Let’s create some content.

admin/snowflake-deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: snowflake
  name: snowflake
spec:
  replicas: 2
  selector:
    matchLabels:
      app: snowflake
  template:
    metadata:
      labels:
        app: snowflake
    spec:
      containers:
      - image: registry.k8s.io/serve_hostname
        imagePullPolicy: Always
        name: snowflake
```

Apply the manifest to create a Deployment:

```shell
kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml
```

We have created a Deployment with a replica count of 2, running Pods called snowflake with a basic container that serves the hostname.

```shell
kubectl get deployment
```
```
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
snowflake   2/2     2            2           2m
```

```shell
kubectl get pods -l app=snowflake
```
```
NAME                         READY   STATUS    RESTARTS   AGE
snowflake-3968820950-9dgr8   1/1     Running   0          2m
snowflake-3968820950-vgc4n   1/1     Running   0          2m
```
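To spot-check that the containers really serve their hostnames, one option is to port-forward into the Deployment and fetch the page. This is a sketch that assumes serve_hostname listens on port 9376 (the port this image uses elsewhere in the Kubernetes examples); adjust if yours differs:

```shell
# Forward a local port to one of the snowflake Pods (assumes port 9376).
kubectl port-forward deployment/snowflake 9376:9376 &
PF_PID=$!
sleep 2   # give the forward a moment to establish

# The response should be the serving Pod's hostname.
curl http://localhost:9376

# Clean up the port-forward.
kill $PF_PID
```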

And this is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.

Let’s switch to the production namespace and show how resources in one namespace are hidden from the other.

```shell
kubectl config use-context prod
```

The production namespace should be empty, and the following commands should return nothing.

```shell
kubectl get deployment
kubectl get pods
```

Production likes to run cattle, so let’s create some cattle pods.

```shell
kubectl create deployment cattle --image=registry.k8s.io/serve_hostname --replicas=5

kubectl get deployment
```
```
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
cattle   5/5     5            5           10s
```

```shell
kubectl get pods -l app=cattle
```
```
NAME                      READY   STATUS    RESTARTS   AGE
cattle-2263376956-41xy6   1/1     Running   0          34s
cattle-2263376956-kw466   1/1     Running   0          34s
cattle-2263376956-n4v97   1/1     Running   0          34s
cattle-2263376956-p5p3i   1/1     Running   0          34s
cattle-2263376956-sxpth   1/1     Running   0          34s
```

At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
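A cluster administrator with access to both namespaces can still see everything at once: the --all-namespaces (or -A) flag cuts across the namespace scoping and adds a NAMESPACE column to the output:

```shell
# List Deployments in every namespace, not just the current context's.
kubectl get deployments --all-namespaces
```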

As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different authorization rules for each namespace.
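As a taste of what per-namespace authorization looks like, RBAC can already express rules like these. The following is a hypothetical sketch, not part of this walkthrough: a Role plus RoleBinding granting a development team broad edit rights in the development namespace only, leaving production untouched. The group name dev-team is an assumption; it would come from your cluster's identity provider.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development   # rights granted here apply only to this namespace
  name: dev-editor
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: development
  name: dev-editor-binding
subjects:
- kind: Group
  name: dev-team           # assumption: defined by your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-editor
  apiGroup: rbac.authorization.k8s.io
```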