Bringing traffic to the cluster
There are downsides to running Kubernetes outside of well-integrated platforms such as AWS or GCE. One of them is the lack of external ingress and load balancing solutions. Fortunately, it’s fairly easy to get an NGINX-powered ingress controller running inside the cluster, which will enable services to register for receiving public traffic.
Ingress controller setup
Because there’s no load balancer available with most cloud providers, we have to make sure the NGINX server is always running on the same host, accessible via an IP address that doesn’t change. As our master node is pretty much idle at this point, and no ordinary pods will get scheduled on it, we make kube1 our dedicated host for routing public traffic.
We already opened ports 80 and 443 during the initial firewall configuration; now all we have to do is write a couple of manifests to deploy the NGINX ingress controller on kube1:
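The manifests themselves aren’t reproduced in full here. As a rough orientation, the namespace they start with (see below) could look like this minimal sketch; the file name ingress/namespace.yml is an assumption:

# ingress/namespace.yml (sketch; file name is an assumption)
apiVersion: v1
kind: Namespace
metadata:
  name: ingress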
One part requires special attention. To make sure NGINX runs on kube1, which is a tainted master node where pods won’t normally be scheduled, we need to specify a toleration:
# from ingress/deployment.yml
tolerations:
- key: node-role.kubernetes.io/master
  operator: Equal
  effect: NoSchedule
Specifying a toleration doesn’t guarantee that a pod gets scheduled on any specific node. For that we need to add a node affinity rule. As we have just a single master node, the following specification is enough to schedule a pod on kube1:
# from ingress/deployment.yml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/master
          operator: Exists
Running kubectl apply -f ingress/ will apply all manifests in this folder. First, a namespace called ingress is created, followed by the NGINX deployment plus a default backend, including the necessary service object, to serve 404 pages for undefined domains and routes. There’s no need to define a service object for NGINX itself, because we configure it to use the host network (hostNetwork: true), which means the container is bound to the actual ports on the host, not to a virtual interface within the pod overlay network.
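For reference, the part of the deployment that enables this might look roughly like the sketch below; the image tag and the name of the default backend service are assumptions, the rest follows what was described above:

# simplified excerpt from ingress/deployment.yml (sketch)
spec:
  template:
    spec:
      hostNetwork: true   # bind NGINX directly to ports 80/443 on the host
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=ingress/default-http-backend   # namespace/name; service name is an assumption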
Services are now able to make use of the ingress controller and receive public traffic with a simple manifest:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: example-service-http
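Assuming the manifest above is saved as example-ingress.yml (the file name is arbitrary), it can be applied and verified like any other resource, even before DNS is set up, by sending the host header explicitly:

kubectl apply -f example-ingress.yml
kubectl get ingress example-ingress
# <public IP of kube1> is a placeholder for the address NGINX is bound to
curl -H "Host: service.example.com" http://<public IP of kube1>/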
The NGINX ingress controller is quite flexible and supports a whole bunch of configuration options.
DNS records
- dns/cloudflare
- dns/google
- dns/aws
At this point we could use a domain name and put some DNS entries into place. To serve web traffic it’s enough to create an A record pointing to the public IP address of kube1, plus a wildcard entry to be able to use subdomains:
Type | Name | Value
---|---|---
A | example.com | public IP address of kube1
CNAME | *.example.com | example.com
Once the DNS entries have propagated, our example service would be accessible at http://service.example.com. If you don’t have a domain name at hand, you can always add an entry to your hosts file instead.
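Such an entry could look like this; the IP address is just an example standing in for kube1’s public address:

# /etc/hosts
203.0.113.10  service.example.com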
Additionally, it might be a good idea to assign a subdomain to each host, e.g. kube1.example.com. It’s much more convenient to SSH into a host using a domain name instead of an IP address.
Obtaining SSL/TLS certificates
Thanks to Let’s Encrypt and a project called kube-lego, it’s incredibly easy to obtain free certificates for any domain name pointing at our Kubernetes cluster. Setting this service up takes no time, and it plays well with the NGINX ingress controller we deployed earlier. These are the related manifests:
Before deploying kube-lego using the manifests above, make sure to replace the email address in ingress/tls/configmap.yml with your own.
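The config map in question is small. A minimal sketch, assuming the standard kube-lego configuration keys and the v01 ACME endpoint from kube-lego’s examples; the name and namespace are assumptions:

# ingress/tls/configmap.yml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  # replace with your own email address before applying
  lego.email: "you@example.com"
  # Let's Encrypt ACME endpoint
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"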
To enable certificates for a service, the ingress manifest needs to be slightly extended:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/tls-acme: "true" # enable certificates
    kubernetes.io/ingress.class: "nginx"
spec:
  tls: # specify domains to fetch certificates for
  - hosts:
    - service.example.com
    secretName: example-service-tls
  rules:
  - host: service.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: example-service-http
After applying this manifest, kube-lego will try to obtain a certificate for service.example.com and reload the NGINX configuration to enable TLS. Make sure to check the logs of the kube-lego pod if something goes wrong.
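For example, assuming kube-lego was deployed into a namespace called kube-lego (adjust if yours differs):

# find the kube-lego pod and follow its logs while the certificate is requested
kubectl -n kube-lego get pods
kubectl -n kube-lego logs -f <kube-lego pod name>
# once issuance succeeds, the certificate lands in the secret referenced by the ingress
kubectl get secret example-service-tls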
NGINX will automatically redirect clients to HTTPS whenever TLS is enabled. If you still want to serve traffic on plain HTTP, add ingress.kubernetes.io/ssl-redirect: "false" to the list of annotations.
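In the example ingress above, the annotations block would then look like this:

metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false" # keep serving plain HTTP as well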
Deploying the Kubernetes Dashboard
Now that everything is in place, we are able to expose services on specific domains and automatically obtain certificates for them. Let’s try this out by deploying the Kubernetes Dashboard with the following manifests:
Optionally, the following manifests can be used to get resource utilization graphs within the dashboard using Heapster:
What’s new here is that we enable basic authentication to restrict access to the dashboard. The following annotations are supported by the NGINX ingress controller, and may or may not work with other solutions:
# from dashboard/ingress.yml
annotations:
  # ...
  ingress.kubernetes.io/auth-type: basic
  ingress.kubernetes.io/auth-secret: kubernetes-dashboard-auth
  ingress.kubernetes.io/auth-realm: "Authentication Required"
# dashboard/secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: kubernetes-dashboard-auth
  namespace: kube-system
data:
  auth: YWRtaW46JGFwcjEkV3hBNGpmQmkkTHYubS9PdzV5Y1RFMXMxMWNMYmJpLw==
type: Opaque
This example will prompt a visitor to enter their credentials (user: admin / password: test) when accessing the dashboard. Secrets for basic authentication can be created using htpasswd, and need to be added to the manifest as a base64-encoded string.
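For example, to generate a value for the secret’s data.auth field with your own credentials (a sketch; htpasswd ships with the Apache utilities, and -w0 assumes GNU base64):

# generate an htpasswd entry and base64 encode it for dashboard/secret.yml
htpasswd -nb admin test | base64 -w0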