Using Kubernetes constructs in policy
This section covers Kubernetes-specific aspects of network policy.
Namespaces
Namespaces are used to create virtual clusters within a Kubernetes cluster. All Kubernetes objects, including NetworkPolicy and CiliumNetworkPolicy, belong to a particular namespace. Depending on how a policy is defined and created, Kubernetes namespaces are automatically taken into account:
- Network policies created and imported as CiliumNetworkPolicy CRD and NetworkPolicy apply within the namespace, i.e. the policy only applies to pods within that namespace. It is however possible to grant access to and from pods in other namespaces as described below.
- Network policies imported directly via the API Reference apply to all namespaces unless a namespace selector is specified as described below.
Note
While specification of the namespace via the label k8s:io.kubernetes.pod.namespace in the fromEndpoints and toEndpoints fields is deliberately supported, specification of the namespace in the endpointSelector is prohibited as it would violate the namespace isolation principle of Kubernetes. The endpointSelector always applies to pods of the namespace which is associated with the CiliumNetworkPolicy resource itself.
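To illustrate the restriction above, the following sketch (a hypothetical policy, not part of the shipped examples) would be rejected, because its endpointSelector attempts to select pods via a namespace label:

```yaml
# INVALID sketch: specifying the namespace label in the endpointSelector
# is prohibited. The endpointSelector always applies to pods of the
# namespace the policy itself lives in (here: ns1).
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "invalid-cross-namespace-selector"   # hypothetical name
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      k8s:io.kubernetes.pod.namespace: ns2   # prohibited here
  ingress:
    - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: ns1   # allowed here
```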
Example: Enforce namespace boundaries
This example demonstrates how to enforce Kubernetes namespace-based boundaries for the namespaces ns1 and ns2 by enabling default-deny on all pods of either namespace and then allowing communication from all pods within the same namespace.
Note
The example locks down ingress of the pods in ns1 and ns2. This means that the pods can still communicate egress to anywhere unless the destination is in either ns1 or ns2, in which case both source and destination have to be in the same namespace. In order to enforce namespace boundaries at egress, the same example can be used by specifying the rules at egress in addition to ingress.
k8s YAML
JSON
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns1"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels: {}
  ingress:
    - fromEndpoints:
        - matchLabels: {}
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns2"
  namespace: ns2
spec:
  endpointSelector:
    matchLabels: {}
  ingress:
    - fromEndpoints:
        - matchLabels: {}
[
  {
    "endpointSelector": {
      "matchLabels": {
        "k8s:io.kubernetes.pod.namespace": "ns1"
      }
    },
    "ingress": [
      {
        "fromEndpoints": [
          {
            "matchLabels": {
              "k8s:io.kubernetes.pod.namespace": "ns1"
            }
          }
        ]
      }
    ]
  },
  {
    "endpointSelector": {
      "matchLabels": {
        "k8s:io.kubernetes.pod.namespace": "ns2"
      }
    },
    "ingress": [
      {
        "fromEndpoints": [
          {
            "matchLabels": {
              "k8s:io.kubernetes.pod.namespace": "ns2"
            }
          }
        ]
      }
    ]
  }
]
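As the note above mentions, the same pattern can be applied at egress to enforce the namespace boundary in both directions. A sketch for ns1 follows (policy name is hypothetical; apply the equivalent in ns2). Note that this also enables default-deny at egress, so any other required egress traffic, such as DNS lookups to kube-dns, must then be allowed explicitly:

```yaml
# Sketch: namespace isolation for ns1 at both ingress and egress.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns1-both-directions"   # hypothetical name
  namespace: ns1
spec:
  endpointSelector:
    matchLabels: {}
  ingress:
    - fromEndpoints:
        - matchLabels: {}    # only pods of ns1 may connect in
  egress:
    - toEndpoints:
        - matchLabels: {}    # pods of ns1 may only connect to ns1
```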
Example: Expose pods across namespaces
The following example exposes all pods with the label name=leia in the namespace ns1 to all pods with the label name=luke in the namespace ns2.
Refer to the example YAML files for a fully functional example including pods deployed to different namespaces.
k8s YAML
JSON
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-expose-across-namespace"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      name: leia
  ingress:
    - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: ns2
            name: luke
[
  {
    "labels": [{"key": "name", "value": "k8s-expose-across-namespace"}],
    "endpointSelector": {
      "matchLabels": {"name": "leia", "k8s:io.kubernetes.pod.namespace": "ns1"}
    },
    "ingress": [
      {
        "fromEndpoints": [
          {"matchLabels": {"name": "luke", "k8s:io.kubernetes.pod.namespace": "ns2"}}
        ]
      }
    ]
  }
]
Example: Allow egress to kube-dns in kube-system namespace
The following example allows all pods in the public namespace in which the policy is created to communicate with kube-dns on port 53/UDP in the kube-system namespace.
k8s YAML
JSON
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-to-kubedns"
  namespace: public
spec:
  endpointSelector: {}
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: '53'
              protocol: UDP
[
  {
    "endpointSelector": {
      "matchLabels": {
        "k8s:io.kubernetes.pod.namespace": "public"
      }
    },
    "egress": [
      {
        "toEndpoints": [
          {
            "matchLabels": {
              "k8s:io.kubernetes.pod.namespace": "kube-system",
              "k8s-app": "kube-dns"
            }
          }
        ],
        "toPorts": [
          {
            "ports": [
              {
                "port": "53",
                "protocol": "UDP"
              }
            ]
          }
        ]
      }
    ]
  }
]
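DNS can fall back to TCP, for example for responses that exceed the UDP message size. If that is a concern, the port list can be extended to cover 53/TCP as well. A sketch of the variant (policy name is hypothetical; this assumes kube-dns also listens on 53/TCP, as it does in standard deployments):

```yaml
# Sketch: allow DNS to kube-dns over both UDP and TCP.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-to-kubedns-udp-tcp"   # hypothetical name
  namespace: public
spec:
  endpointSelector: {}
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: '53'
              protocol: UDP
            - port: '53'
              protocol: TCP
```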
ServiceAccounts
Kubernetes Service Accounts are used to associate an identity to a pod or process managed by Kubernetes and grant identities access to Kubernetes resources and secrets. Cilium supports the specification of network security policies based on the service account identity of a pod.
The service account of a pod is either defined via the service account admission controller or can be specified directly in the Pod, Deployment, or ReplicationController resource like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: leia
  ...
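The referenced service account must exist in the pod's namespace; it is a standard Kubernetes resource, created for example like this:

```yaml
# Standard Kubernetes ServiceAccount referenced by the pod above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: leia
```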
Example
The following example allows any pod running under the service account “luke” to issue an HTTP GET /public request on TCP port 80 to all pods running under the service account “leia”.
Refer to the example YAML files for a fully functional example including deployment and service account resources.
k8s YAML
JSON
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-svc-account"
spec:
  endpointSelector:
    matchLabels:
      io.cilium.k8s.policy.serviceaccount: leia
  ingress:
    - fromEndpoints:
        - matchLabels:
            io.cilium.k8s.policy.serviceaccount: luke
      toPorts:
        - ports:
            - port: '80'
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/public$"
[
  {
    "labels": [{"key": "name", "value": "k8s-svc-account"}],
    "endpointSelector": {"matchLabels": {"io.cilium.k8s.policy.serviceaccount": "leia"}},
    "ingress": [
      {
        "fromEndpoints": [
          {"matchLabels": {"io.cilium.k8s.policy.serviceaccount": "luke"}}
        ],
        "toPorts": [
          {
            "ports": [
              {"port": "80", "protocol": "TCP"}
            ],
            "rules": {
              "http": [
                {
                  "method": "GET",
                  "path": "/public$"
                }
              ]
            }
          }
        ]
      }
    ]
  }
]
Multi-Cluster
When operating multiple clusters with cluster mesh, the cluster name is exposed via the label io.cilium.k8s.policy.cluster and can be used to restrict policies to a particular cluster.
k8s YAML
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-cross-cluster"
spec:
  description: "Allow x-wing in cluster1 to contact rebel-base in cluster2"
  endpointSelector:
    matchLabels:
      name: x-wing
      io.cilium.k8s.policy.cluster: cluster1
  egress:
    - toEndpoints:
        - matchLabels:
            name: rebel-base
            io.cilium.k8s.policy.cluster: cluster2
Clusterwide Policies
CiliumNetworkPolicy only allows binding a policy to a particular namespace. There can be situations where one wants a policy with cluster-wide effect, which can be achieved using Cilium’s CiliumClusterwideNetworkPolicy Kubernetes custom resource. The specification of the policy is the same as that of CiliumNetworkPolicy except that it is not namespaced.
In the cluster, this policy will allow ingress traffic from pods matching the label name=luke in any namespace to pods matching the label name=leia in any namespace.
k8s YAML
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "clusterwide-policy-example"
spec:
  description: "Policy for selective ingress allow to a pod from only a pod with given label"
  endpointSelector:
    matchLabels:
      name: leia
  ingress:
    - fromEndpoints:
        - matchLabels:
            name: luke
Example: Allow all ingress to kube-dns
The following example allows all Cilium managed endpoints in the cluster to communicate with kube-dns on port 53/UDP in the kube-system namespace.
k8s YAML
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "wildcard-from-endpoints"
spec:
  description: "Policy for ingress allow to kube-dns from all Cilium managed endpoints in the cluster"
  endpointSelector:
    matchLabels:
      k8s:io.kubernetes.pod.namespace: kube-system
      k8s-app: kube-dns
  ingress:
    - fromEndpoints:
        - {}
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP