Locking down external access with DNS-based policies
This document serves as an introduction to using Cilium to enforce DNS-based security policies for Kubernetes pods.
If you haven’t read the Introduction to Cilium & Hubble yet, we’d encourage you to do that first.
The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.
Setup Cilium
If you have not set up Cilium yet, follow the Quick Installation guide for instructions on how to quickly bootstrap a Kubernetes cluster and install Cilium. If in doubt, pick the minikube route; you will be good to go in less than 5 minutes.
Deploy the Demo Application
DNS-based policies are very useful for controlling access to services running outside the Kubernetes cluster. DNS acts as a persistent service identifier for both external services provided by AWS, Google, Twilio, Stripe, etc., and internal services such as database clusters running in private subnets outside Kubernetes. CIDR or IP-based policies are cumbersome and hard to maintain as the IPs associated with external services can change frequently. The Cilium DNS-based policies provide an easy mechanism to specify access control while Cilium manages the harder aspects of tracking DNS to IP mapping.
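For contrast, a CIDR-based egress rule pinning a service to specific address ranges might look like the sketch below. The policy name and the address range are purely illustrative (the range is the reserved TEST-NET-1 block, not any real service's IPs); the point is that such ranges would have to be updated by hand every time the service's IPs rotate, which the toFQDNs approach used in this guide avoids.

```yaml
# Hypothetical CIDR-based alternative. The address range is made up for
# illustration and would require constant manual maintenance in practice.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "cidr-example"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toCIDR:
    - 192.0.2.0/24   # example range (TEST-NET-1), not a real service's IPs
```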
In this guide we will learn about:
- Controlling egress access to services outside the cluster using DNS-based policies
- Using patterns (or wildcards) to whitelist a subset of DNS domains
- Combining DNS, port and L7 rules for restricting access to external service
In line with our Star Wars-themed examples, we will use a simple scenario where the Empire’s mediabot pods need access to Twitter for managing the Empire’s tweets. The pods shouldn’t have access to any other external service.
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes-dns/dns-sw-app.yaml
kubectl wait pod/mediabot --for=condition=Ready
kubectl get po
NAME READY STATUS RESTARTS AGE
pod/mediabot 1/1 Running 0 14s
Apply DNS Egress Policy
The following Cilium network policy allows mediabot pods to access only api.twitter.com.
Generic:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchName: "api.twitter.com"
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
OpenShift:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchName: "api.twitter.com"
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": openshift-dns
    toPorts:
    - ports:
      - port: "5353"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
Note: OpenShift users will need to modify the policies to match the namespace openshift-dns (instead of kube-system), remove the match on the k8s:k8s-app=kube-dns label, and change the port to 5353.
Let’s take a closer look at the policy:
- The first egress section uses the toFQDNs: matchName specification to allow egress to api.twitter.com. The destination DNS name must exactly match the name specified in the rule. The endpointSelector ensures that only pods with the labels class: mediabot, org: empire get this egress access.
- The second egress section (toEndpoints) allows mediabot pods to access the kube-dns service. Note that rules: dns instructs Cilium to inspect and allow DNS lookups matching the specified patterns. In this case, all DNS queries are inspected and allowed.
Note that with this policy the mediabot pod doesn’t have access to any internal cluster service other than kube-dns. Refer to Network Policy to learn more about policies for controlling access to internal cluster services.
Let’s apply the policy:
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes-dns/dns-matchname.yaml
Testing the policy, we see that mediabot has access to api.twitter.com but doesn’t have access to any other external service, e.g., help.twitter.com.
$ kubectl exec mediabot -- curl -I https://api.twitter.com | head -1
HTTP/1.1 404 Not Found
$ kubectl exec mediabot -- curl -I --max-time 5 https://help.twitter.com | head -1
curl: (28) Connection timed out after 5000 milliseconds
command terminated with exit code 28
DNS Policies Using Patterns
The above policy controlled DNS access based on an exact match of the DNS domain name. Often it is necessary to allow access to a subset of domains. Say that, in the above example, mediabot pods need access to any Twitter sub-domain, e.g., anything matching the pattern *.twitter.com. We can achieve this easily by changing the toFQDNs rule to use matchPattern instead of matchName.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchPattern: "*.twitter.com"
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes-dns/dns-pattern.yaml
Test that mediabot has access to multiple Twitter services whose DNS names match the pattern *.twitter.com. It is important to note, and test, that this does not allow access to twitter.com itself, because the "*." in the pattern requires one subdomain to be present in the DNS name. You can add more matchName and matchPattern clauses to extend the access. (See DNS based policies to learn more about specifying DNS rules using patterns and names.)
$ kubectl exec mediabot -- curl -I https://help.twitter.com | head -1
HTTP/1.1 302 Found
$ kubectl exec mediabot -- curl -I https://about.twitter.com | head -1
HTTP/1.1 200 OK
$ kubectl exec mediabot -- curl -I --max-time 5 https://twitter.com | head -1
curl: (7) Failed to connect to twitter.com port 443: Operation timed out
command terminated with exit code 7
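As noted above, the bare domain can be allowed alongside its sub-domains by combining matchName and matchPattern clauses in the same toFQDNs rule. A sketch of just the egress fragment (the rest of the policy is unchanged; this variant is not part of the guide’s dns-pattern.yaml example):

```yaml
# Egress fragment: allow twitter.com itself in addition to any
# sub-domain matched by the pattern.
egress:
- toFQDNs:
  - matchPattern: "*.twitter.com"
  - matchName: "twitter.com"
```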
Combining DNS, Port and L7 Rules
DNS-based policies can be combined with port (L4) and API (L7) rules to further restrict access. In our example, we will restrict mediabot pods to accessing Twitter services only on port 443. The toPorts section in the policy below achieves the port-based restriction along with the DNS-based policies.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchPattern: "*.twitter.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes-dns/dns-port.yaml
When testing, access to https://help.twitter.com on port 443 succeeds, while access to http://help.twitter.com on port 80 is denied.
$ kubectl exec mediabot -- curl -I https://help.twitter.com | head -1
HTTP/1.1 302 Found
$ kubectl exec mediabot -- curl -I --max-time 5 http://help.twitter.com | head -1
curl: (28) Connection timed out after 5001 milliseconds
command terminated with exit code 28
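To sketch how an L7 rule attaches to the same structure, the egress fragment below allows only HTTP GET requests on port 80. This fragment is an illustration, not part of the guide’s dns-port.yaml example; note also that applying HTTP rules to TLS traffic on port 443 would additionally require TLS inspection, which is beyond the scope of this guide.

```yaml
# Egress fragment combining a DNS rule (toFQDNs), a port rule (toPorts),
# and an L7 HTTP rule that permits only GET requests on port 80.
egress:
- toFQDNs:
  - matchPattern: "*.twitter.com"
  toPorts:
  - ports:
    - port: "80"
      protocol: TCP
    rules:
      http:
      - method: "GET"
```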
Refer to Layer 4 Examples and Layer 7 Examples to learn more about Cilium L4 and L7 network policies.
Clean-up
kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes-dns/dns-sw-app.yaml
kubectl delete cnp fqdn