Egress Gateway (beta)
Note
This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems.
The egress gateway allows users to redirect egress pod traffic through specific gateway nodes. Packets are masqueraded to the gateway node IP.
This document explains how to enable the egress gateway and configure egress NAT policies to route and SNAT the egress traffic for a specific workload.
Note
This guide assumes that Cilium has been correctly installed in your Kubernetes cluster. Please see Quick Installation for more information. If unsure, run cilium status and validate that Cilium is up and running.
Enable Egress Gateway
The feature is disabled by default. Enable it using one of the following methods:
Helm
ConfigMap
If you installed Cilium via helm install, you may enable the egress gateway feature with the following command:
helm upgrade cilium cilium/cilium --version 1.10.2 \
--namespace kube-system \
--reuse-values \
--set egressGateway.enabled=true \
--set bpf.masquerade=true \
--set kubeProxyReplacement=strict
Egress gateway support can be enabled by setting the following options in the cilium-config ConfigMap:
enable-egress-gateway: true
enable-bpf-masquerade: true
kube-proxy-replacement: strict
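If you prefer to modify a running cluster directly, one possible way to apply these settings is to patch the ConfigMap and restart the Cilium agents so they pick up the new configuration. This is only a sketch; the exact rollout procedure may differ in your environment:

$ kubectl -n kube-system patch configmap cilium-config --type merge \
    --patch '{"data":{"enable-egress-gateway":"true","enable-bpf-masquerade":"true","kube-proxy-replacement":"strict"}}'
$ # Restart the Cilium agent pods so they load the updated ConfigMap
$ kubectl -n kube-system rollout restart daemonset/cilium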
Create an External Service (Optional)
This feature changes the default behavior of how a packet leaves the cluster. As a result, the external service will see a different source IP address for traffic coming from the cluster. If you don’t have an external service to experiment with, nginx is a very simple example that can demonstrate the functionality, and nginx’s access log shows which IP address each request comes from.
Create an nginx service on a Linux node that is external to the existing Kubernetes cluster, and use it as the destination of the egress traffic.
$ # Install and start nginx
$ sudo apt install nginx
$ sudo systemctl start nginx
$ # Make sure the service is started and listens on port :80
$ sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-04-04 21:58:57 UTC; 1min 3s ago
[...]
$ curl http://192.168.33.13:80 # Assume 192.168.33.13 is the external IP of the node
[...]
<title>Welcome to nginx!</title>
[...]
Create Client Pods
Deploy a client pod that will generate traffic to be redirected based on the configuration specified in the CiliumEgressNATPolicy.
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes-dns/dns-sw-app.yaml
$ kubectl get po
NAME READY STATUS RESTARTS AGE
pod/mediabot 1/1 Running 0 14s
$ kubectl exec mediabot -- curl http://192.168.33.13:80
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
[...]
Verify in the access log on the nginx node (or on whatever external service you used) that the request comes from one of the nodes in the Kubernetes cluster. For example, on the nginx node, the access log will contain something like the following:
$ tail /var/log/nginx/access.log
[...]
192.168.33.11 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"
In the previous example, the client pod is running on the node 192.168.33.11, so the result makes sense. This is the default Kubernetes behavior without egress NAT.
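If you want to confirm which node the client pod was scheduled on (and therefore which node IP to expect in the access log), a quick check is:

$ # The NODE column shows where mediabot runs; that node's IP should match the
$ # source address seen in the nginx access log (192.168.33.11 in this example)
$ kubectl get pod mediabot -o wide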
Configure Egress IPs
Deploy the following deployment to assign additional egress IPs to the gateway node. The node that runs the pod will have additional IP addresses configured on the external interface (enp0s8 in this example), and become the egress gateway. In the following example, 192.168.33.100 and 192.168.33.101 become the egress IPs which can be consumed by an Egress NAT Policy. Please make sure these IP addresses are routable on the interface they are assigned to, otherwise the return traffic won’t be able to route back.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "egress-ip-assign"
  labels:
    name: "egress-ip-assign"
spec:
  replicas: 1
  selector:
    matchLabels:
      name: "egress-ip-assign"
  template:
    metadata:
      labels:
        name: "egress-ip-assign"
    spec:
      affinity:
        # the following pod affinity ensures that the "egress-ip-assign" pod
        # runs on the same node as the mediabot pod
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: class
                operator: In
                values:
                - mediabot
              - key: org
                operator: In
                values:
                - empire
            topologyKey: "kubernetes.io/hostname"
      hostNetwork: true
      containers:
      - name: egress-ip
        image: docker.io/library/busybox:1.31.1
        command: ["/bin/sh","-c"]
        securityContext:
          privileged: true
        env:
        - name: EGRESS_IPS
          value: "192.168.33.100/24 192.168.33.101/24"
        args:
        - "for i in $EGRESS_IPS; do ip address add $i dev enp0s8; done; sleep 10000000"
        lifecycle:
          preStop:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - "for i in $EGRESS_IPS; do ip address del $i dev enp0s8; done"
Create Egress NAT Policy
Apply the following Egress NAT Policy, which basically means: when a pod running in the namespace default with the labels org: empire and class: mediabot tries to talk to the IP CIDR 192.168.33.13/32, use the egress IP 192.168.33.100. In this example, it tells Cilium to forward the packet from the client pod to the gateway node with egress IP 192.168.33.100, and masquerade with that IP address.
apiVersion: cilium.io/v2alpha1
kind: CiliumEgressNATPolicy
metadata:
  name: egress-sample
spec:
  egress:
  - podSelector:
      matchLabels:
        org: empire
        class: mediabot
        # The following label selects default namespace
        io.kubernetes.pod.namespace: default
    # Or use namespace label selector to select multiple namespaces
    # namespaceSelector:
    #   matchLabels:
    #     ns: default
  destinationCIDRs:
  - 192.168.33.13/32
  egressSourceIP: "192.168.33.100"
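Save the manifest to a file (egress-nat-policy.yaml is just an illustrative name) and apply it, then confirm the resource exists. The exact resource name accepted by kubectl depends on how the CRD is registered in your cluster:

$ kubectl apply -f egress-nat-policy.yaml
$ kubectl get ciliumegressnatpolicy egress-sample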
Let’s switch back to the client pod and verify it works.
$ kubectl exec mediabot -- curl http://192.168.33.13:80
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
[...]
Verify in the access log on the nginx node (or on the service of your choice) that the request now comes from the egress IP rather than from one of the nodes in the Kubernetes cluster. In nginx’s case, you will see logs like the following, showing that the request now comes from 192.168.33.100 instead of 192.168.33.11.
$ tail /var/log/nginx/access.log
[...]
192.168.33.100 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"