Kubernetes Ingress Controller

This guide explains how to use Traefik as an Ingress controller for a Kubernetes cluster.

If you are not familiar with Ingresses in Kubernetes, you might want to read the Kubernetes user guide first.

The config files used in this guide can be found in the examples directory.

Prerequisites

  • A working Kubernetes cluster. If you want to follow along with this guide, you should set up minikube on your machine, as it is the quickest way to get a local Kubernetes cluster running for experimentation and development.

Note

The guide is likely not fully adequate for a production-ready setup.

Role Based Access Control configuration (Kubernetes 1.6+ only)

Kubernetes introduced Role-Based Access Control (RBAC) in 1.6 to allow fine-grained control of Kubernetes resources and the API.

If your cluster is configured with RBAC, you will need to authorize Traefik to use the Kubernetes API. There are two ways to set up the proper permissions: via namespace-specific RoleBindings or a single, global ClusterRoleBinding.

RoleBindings per namespace restrict the granted permissions to exactly those namespaces that Traefik is watching, thereby following the principle of least privilege. This is the preferred approach if Traefik is not supposed to watch all namespaces and the set of namespaces does not change dynamically. Otherwise, a single ClusterRoleBinding must be employed.

Note

RoleBindings per namespace are available in Traefik 1.5 and later. Please use ClusterRoleBindings for older versions.

For the sake of simplicity, this guide will use a ClusterRoleBinding:

```yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
```

examples/k8s/traefik-rbac.yaml

```bash
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-rbac.yaml
```

For namespaced restrictions, one RoleBinding is required per watched namespace along with a corresponding configuration of Traefik's kubernetes.namespaces parameter.
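As a sketch of the namespaced variant (the traefik namespace below is a placeholder for whatever namespace Traefik should watch), a Role carrying the same rules as the ClusterRole above is bound in each watched namespace:

```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: traefik   # placeholder: create one Role per watched namespace
rules:
  # core resources; the extensions/ingresses rules from the ClusterRole
  # above are needed here as well for Traefik to watch Ingress objects
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: traefik   # placeholder namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
```

Traefik would then be started with something like `--kubernetes.namespaces=traefik` (extending the comma-separated list for each additional namespace).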

Deploy Traefik using a Deployment or DaemonSet

It is possible to use Traefik with a Deployment or a DaemonSet object; both options have their own pros and cons:

  • Scalability can be better with a Deployment: a DaemonSet follows a one-pod-per-node model, whereas a Deployment lets you run exactly as many replicas as your environment requires.
  • DaemonSets automatically scale to new nodes, when the nodes join the cluster, whereas Deployment pods are only scheduled on new nodes if required.
  • DaemonSets ensure that only one replica of pods run on any single node. Deployments require affinity settings if you want to ensure that two pods don't end up on the same node.
  • DaemonSets can be run with the NET_BIND_SERVICE capability, which will allow it to bind to port 80/443/etc on each host. This will allow bypassing the kube-proxy, and reduce traffic hops. Note that this is against the Kubernetes Best Practices Guidelines, and raises the potential for scheduling/scaling issues. Despite potential issues, this remains the choice for most ingress controllers.
  • If you are unsure which to choose, start with the DaemonSet.

The Deployment object looks like this:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
```

examples/k8s/traefik-deployment.yaml

Note

The Service will expose two NodePorts which allow access to the ingress and the web interface.

The DaemonSet object does not look much different:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
```

examples/k8s/traefik-ds.yaml

Note

This will create a DaemonSet that uses privileged ports 80/8080 on the host. This may not work on all providers, but it illustrates the static (non-NodePort) hostPort binding. The traefik-ingress-service can still be used inside the cluster to access the DaemonSet pods.

To deploy Traefik to your cluster, start by submitting one of the YAML files with kubectl:

```bash
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-deployment.yaml
```

```bash
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-ds.yaml
```

There are some significant differences between using Deployments and DaemonSets:

  • The Deployment allows easier scaling up and down. It can implement the full pod lifecycle and has supported rolling updates since Kubernetes 1.2. At least one Pod is needed to run the Deployment.
  • The DaemonSet automatically scales to all nodes that meet a specific selector and guarantees to fill nodes one at a time. Rolling updates are fully supported for DaemonSets from Kubernetes 1.7 as well.

Check the Pods

Now let's check whether our command was successful.

Start by listing the pods in the kube-system namespace:

```bash
kubectl --namespace=kube-system get pods
```

```
NAME                                         READY     STATUS    RESTARTS   AGE
kube-addon-manager-minikubevm                1/1       Running   0          4h
kubernetes-dashboard-s8krj                   1/1       Running   0          4h
traefik-ingress-controller-678226159-eqseo   1/1       Running   0          7m
```

You should see that after submitting the Deployment or DaemonSet to Kubernetes it has launched a Pod, and it is now running. It might take a few moments for Kubernetes to pull the Traefik image and start the container.

Note

You can also check the deployment with the Kubernetes dashboard: run minikube dashboard to open it in your browser, then choose the kube-system namespace from the menu at the top right of the screen.

You should now be able to access Traefik on port 80 of your Minikube instance when using the DaemonSet:

```bash
curl $(minikube ip)
```

```
404 page not found
```

If you decided to use the Deployment, you need to target the correct NodePort, which you can find by executing kubectl get services --namespace=kube-system.

```bash
curl $(minikube ip):<NODEPORT>
```

```
404 page not found
```

Note

We expect to see a 404 response here as we haven't yet given Traefik any configuration.

All further examples below assume a DaemonSet installation. Deployment users will need to append the NodePort when constructing requests.

Deploy Traefik using Helm Chart

Note

The Helm Chart is maintained by the community, not the Traefik project maintainers.

Instead of installing Traefik via Kubernetes objects directly, you can also use the Traefik Helm chart.

Install the Traefik chart by:

```bash
helm install stable/traefik
```

Alternatively, install the Traefik chart using a values.yaml file:

```bash
helm install --values values.yaml stable/traefik
```

where values.yaml contains, for example:

```yaml
dashboard:
  enabled: true
  domain: traefik-ui.minikube
kubernetes:
  namespaces:
    - default
    - kube-system
```

For more information, check out the documentation.

Submitting an Ingress to the Cluster

Let's start by creating a Service and an Ingress that will expose the Traefik Web UI.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
```

examples/k8s/ui.yaml

```bash
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/ui.yaml
```

Now let's set up an entry in our /etc/hosts file to route traefik-ui.minikube to our cluster.

In production you would want to set up real DNS entries. You can get the IP address of your minikube instance by running minikube ip:

```bash
echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
```

We should now be able to visit traefik-ui.minikube in the browser and view the Traefik web UI.

Add a TLS Certificate to the Ingress

Note

For this example to work you need a TLS entrypoint. You don't have to provide a TLS certificate at this point. For more details see here.

You can add a TLS entrypoint by adding the following args to the container spec:

```
--defaultentrypoints=http,https
--entrypoints=Name:https Address::443 TLS
--entrypoints=Name:http Address::80
```

Now let's add the TLS port either to the deployment:

```yaml
ports:
- name: https
  containerPort: 443
```

or to the daemon set:

```yaml
ports:
- name: https
  containerPort: 443
  hostPort: 443
```

To set up an HTTPS-protected ingress, you can leverage the TLS feature of the Ingress resource.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80
  tls:
  - secretName: traefik-ui-tls-cert
```

In addition to the modified ingress you need to provide the TLS certificate via a Kubernetes secret in the same namespace as the ingress. The following two commands will generate a new certificate and create a secret containing the key and cert files.

```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=traefik-ui.minikube"
kubectl -n kube-system create secret tls traefik-ui-tls-cert --key=tls.key --cert=tls.crt
```

If there are any errors while loading the TLS section of an ingress, the whole ingress will be skipped.

Note

The secret must have two entries named tls.key and tls.crt. See the Kubernetes documentation for more details.

Note

The TLS certificates will be added to all entrypoints defined by the ingress annotation traefik.frontend.entryPoints. If no such annotation is provided, the TLS certificates will be added to all TLS-enabled defaultEntryPoints.
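For example, to attach the certificates to the HTTPS entrypoint only, the annotation would look like this (shown as a metadata fragment, with the entrypoint name assumed to be https as in the args above):

```yaml
metadata:
  annotations:
    traefik.frontend.entryPoints: "https"   # entrypoint name from your Traefik args
```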

Note

The hosts field in the TLS configuration is ignored; instead, the domains provided by the certificate are used for this purpose. It is recommended not to use wildcard certificates, as they will match globally.

Basic Authentication

It's possible to protect access to Traefik through basic authentication. (See the Kubernetes Ingress configuration page for syntactical details and restrictions.)

Creating the Secret

A. Use htpasswd to create a file containing the username and the MD5-encoded password:

```bash
htpasswd -c ./auth myusername
```

You will be prompted for a password which you will have to enter twice. htpasswd will create a file with the following:

```bash
cat auth
```

```
myusername:$apr1$78Jyn/1K$ERHKVRPPlzAX8eBtLuvRZ0
```

B. Now use kubectl to create a secret in the monitoring namespace using the file created by htpasswd.

```bash
kubectl create secret generic mysecret --from-file auth --namespace=monitoring
```

Note

The Secret must be in the same namespace as the Ingress object.

C. Attach the following annotations to the Ingress object:

  • traefik.ingress.kubernetes.io/auth-type: "basic"
  • traefik.ingress.kubernetes.io/auth-secret: "mysecret"

They specify basic authentication and reference the Secret mysecret containing the credentials.

Following is a full Ingress example based on Prometheus:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/auth-type: "basic"
    traefik.ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: dashboard.prometheus.example.com
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090
```

You can apply the example as follows:

```bash
kubectl create -f prometheus-ingress.yaml -n monitoring
```

Name-based Routing

In this example we are going to set up websites for three of the United Kingdom's best-loved cheeses: Cheddar, Stilton, and Wensleydale.

First, let's launch the pods for the cheese websites.

```yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: stilton
  labels:
    app: cheese
    cheese: stilton
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cheese
      task: stilton
  template:
    metadata:
      labels:
        app: cheese
        task: stilton
        version: v0.0.1
    spec:
      containers:
      - name: cheese
        image: errm/cheese:stilton
        ports:
        - containerPort: 80
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: cheddar
  labels:
    app: cheese
    cheese: cheddar
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cheese
      task: cheddar
  template:
    metadata:
      labels:
        app: cheese
        task: cheddar
        version: v0.0.1
    spec:
      containers:
      - name: cheese
        image: errm/cheese:cheddar
        ports:
        - containerPort: 80
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: wensleydale
  labels:
    app: cheese
    cheese: wensleydale
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cheese
      task: wensleydale
  template:
    metadata:
      labels:
        app: cheese
        task: wensleydale
        version: v0.0.1
    spec:
      containers:
      - name: cheese
        image: errm/cheese:wensleydale
        ports:
        - containerPort: 80
```

examples/k8s/cheese-deployments.yaml

```bash
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/cheese-deployments.yaml
```

Next we need to set up a Service for each of the cheese pods.

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: stilton
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: cheese
    task: stilton
---
apiVersion: v1
kind: Service
metadata:
  name: cheddar
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: cheese
    task: cheddar
---
apiVersion: v1
kind: Service
metadata:
  name: wensleydale
  annotations:
    traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: cheese
    task: wensleydale
```

Note

We also set a circuit breaker expression for one of the backends by setting the traefik.backend.circuitbreaker annotation on the service.

examples/k8s/cheese-services.yaml

```bash
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/cheese-services.yaml
```

Now we can submit an ingress for the cheese websites.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheese
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: stilton.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: stilton
          servicePort: http
  - host: cheddar.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: cheddar
          servicePort: http
  - host: wensleydale.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: wensleydale
          servicePort: http
```

examples/k8s/cheese-ingress.yaml

Note

We list each hostname, and add a backend service.

```bash
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/cheese-ingress.yaml
```

Now visit the Traefik dashboard and you should see a frontend for each host, along with a backend listing for each service, with a server set up for each pod.

If you edit your /etc/hosts again you should be able to access the cheese websites in your browser.

```bash
echo "$(minikube ip) stilton.minikube cheddar.minikube wensleydale.minikube" | sudo tee -a /etc/hosts
```

Path-based Routing

Now let's suppose that our fictional client has decided that, while they are super happy with our cheesy web design, when they asked for three websites they had not really bargained on having to buy three domain names.

No problem, we say, why don't we reconfigure the sites to host all 3 under one domain.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheeses
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - host: cheeses.minikube
    http:
      paths:
      - path: /stilton
        backend:
          serviceName: stilton
          servicePort: http
      - path: /cheddar
        backend:
          serviceName: cheddar
          servicePort: http
      - path: /wensleydale
        backend:
          serviceName: wensleydale
          servicePort: http
```

examples/k8s/cheeses-ingress.yaml

Note

We are configuring Traefik to strip the prefix from the URL path with the traefik.frontend.rule.type annotation so that we can use the containers from the previous example without modification.

```bash
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/cheeses-ingress.yaml
```

```bash
echo "$(minikube ip) cheeses.minikube" | sudo tee -a /etc/hosts
```

You should now be able to visit the websites in your browser.

Multiple Ingress Definitions for the Same Host (or Host+Path)

Traefik will merge multiple Ingress definitions for the same host/path pair into one definition.

Let's say the number of cheese services is growing. It is now time to move the cheese services to a dedicated cheese namespace to simplify the management of cheese and non-cheese services.

Simply deploy a new Ingress object with the same host and path into the cheese namespace:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheese
  namespace: cheese
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - host: cheese.minikube
    http:
      paths:
      - path: /cheddar
        backend:
          serviceName: cheddar
          servicePort: http
```

Traefik will now look for cheddar service endpoints (ports on healthy pods) in both the cheese and the default namespace. Deploying cheddar into the cheese namespace and afterwards shutting down cheddar in the default namespace is enough to migrate the traffic.

Note

The kubernetes documentation does not specify this merging behavior.

Note

Merging ingress definitions can cause problems if the annotations differ or if the services handle requests differently. Be careful and extra cautious when running multiple overlapping ingress definitions.

Specifying Routing Priorities

Sometimes you need to specify priority for ingress routes, especially when handling wildcard routes. This can be done by adding the traefik.frontend.priority annotation, e.g.:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wildcard-cheeses
  annotations:
    traefik.frontend.priority: "1"
spec:
  rules:
  - host: "*.minikube"
    http:
      paths:
      - path: /
        backend:
          serviceName: stilton
          servicePort: http
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: specific-cheeses
  annotations:
    traefik.frontend.priority: "2"
spec:
  rules:
  - host: specific.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: stilton
          servicePort: http
```

Note that priority values must be quoted to avoid numeric interpretation (numeric annotation values are illegal).

Forwarding to ExternalNames

When specifying an ExternalName, Traefik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443. This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.
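A minimal sketch of such a mapping (the service name and external host below are placeholders): because the Service port is 443, Traefik will use HTTPS towards the external host.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-app            # placeholder name
spec:
  type: ExternalName
  externalName: app.example.net # placeholder external host
  ports:
  - name: https
    port: 443                   # matching 443 makes Traefik use HTTPS upstream
```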

Disable passing the Host Header

By default Traefik will pass the incoming Host header to the upstream resource.

However, there are times when you may not want this to be the case. For example, if your service is of the ExternalName type.

Disable globally

Add the following to your TOML configuration file:

```toml
disablePassHostHeaders = true
```

Disable per Ingress

To disable passing the Host header per ingress resource set the traefik.frontend.passHostHeader annotation on your ingress to "false".

Here is an example definition:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.passHostHeader: "false"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /static
        backend:
          serviceName: static
          servicePort: https
```

And an example service definition:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: static
spec:
  ports:
  - name: https
    port: 443
  type: ExternalName
  externalName: static.otherdomain.com
```

If you were to visit example.com/static the request would then be passed on to static.otherdomain.com/static, and static.otherdomain.com would receive the request with the Host header being static.otherdomain.com.

Note

The per-ingress annotation overrides whatever the global value is set to. So you could set disablePassHostHeaders to true in your TOML configuration file and then enable passing the host header per ingress if you wanted.

Partitioning the Ingress object space

By default, Traefik processes every Ingress object it observes. At times, however, it may be desirable to ignore certain objects. The following sub-sections describe common use cases and how they can be handled with Traefik.

Between Traefik and other Ingress controller implementations

Sometimes Traefik runs alongside other Ingress controller implementations, for example when both Traefik and a cloud provider's Ingress controller are active.

The kubernetes.io/ingress.class annotation can be attached to any Ingress object in order to control whether Traefik should handle it.

If the annotation is missing, contains an empty value, or the value traefik, then the Traefik controller will take responsibility and process the associated Ingress object.

It is also possible to set the ingressClass option in Traefik to a particular value. Traefik will only process matching Ingress objects. For instance, setting the option to traefik-internal causes Traefik to process Ingress objects with the same kubernetes.io/ingress.class annotation value, ignoring all other objects (including those with a traefik value, empty value, and missing annotation).
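A sketch of such a provider configuration in TOML (the traefik-internal value is illustrative, chosen to match the recommendation below of prefixing values with traefik):

```toml
[kubernetes]
ingressClass = "traefik-internal"
```

Matching Ingress objects then carry the same value in their annotation:

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik-internal
```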

Note

Letting multiple ingress controllers handle the same ingress objects can lead to unintended behavior. It is recommended to prefix all ingressClass values with traefik to avoid unintended collisions with other ingress implementations.

Between multiple Traefik Deployments

Sometimes multiple Traefik Deployments are supposed to run concurrently. For instance, it is conceivable to have one Deployment deal with internal and another one with external traffic.

For such cases, it is advisable to classify Ingress objects through a label and configure the labelSelector option per each Traefik Deployment accordingly. To stick with the internal/external example above, all Ingress objects meant for internal traffic could receive a traffic-type: internal label while objects designated for external traffic receive a traffic-type: external label. The label selectors on the Traefik Deployments would then be traffic-type=internal and traffic-type=external, respectively.
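As a sketch under the internal/external example above, the internal Traefik Deployment would select on the label in its provider configuration, and the internal Ingress objects would carry that label:

```toml
# provider configuration of the "internal" Traefik Deployment
[kubernetes]
labelselector = "traffic-type=internal"
```

```yaml
# metadata fragment of an Ingress meant for internal traffic
metadata:
  labels:
    traffic-type: internal
```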

Traffic Splitting

It is possible to split Ingress traffic in a fine-grained manner between multiple deployments using service weights.

One canonical use case is canary releases where a deployment representing a newer release is to receive an initially small but ever-increasing fraction of the requests over time. The way this can be done in Traefik is to specify a percentage of requests that should go into each deployment.

For instance, say that an application my-app runs in version 1. A newer version 2 is about to be released, but confidence in the robustness and reliability of new version running in production can only be gained gradually. Thus, a new deployment my-app-canary is created and scaled to a replica count that suffices for a 1% traffic share. Along with it, a Service object is created as usual.

The Ingress specification would look like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      my-app: 99%
      my-app-canary: 1%
  name: my-app
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 80
        path: /
      - backend:
          serviceName: my-app-canary
          servicePort: 80
        path: /
```

Take note of the traefik.ingress.kubernetes.io/service-weights annotation: It specifies the distribution of requests among the referenced backend services, my-app and my-app-canary. With this definition, Traefik will route 99% of the requests to the pods backed by the my-app deployment, and 1% to those backed by my-app-canary. Over time, the ratio may slowly shift towards the canary deployment until it is deemed to replace the previous main application, in steps such as 5%/95%, 10%/90%, 50%/50%, and finally 100%/0%.

A few conditions must hold for service weights to be applied correctly:

  • The associated service backends must share the same path and host.
  • The total percentage shared across all service backends must yield 100% (see the section on omitting the final service, however).
  • The percentage values are interpreted as floating point numbers to a supported precision as defined in the annotation documentation.

Omitting the Final Service

When specifying service weights, it is possible to omit exactly one service for convenience reasons.

For instance, the following definition shows how to split requests in a scenario where a canary release is accompanied by a baseline deployment for easier metrics comparison or automated canary analysis:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      my-app-canary: 10%
      my-app-baseline: 10%
  name: app
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: my-app-canary
          servicePort: 80
        path: /
      - backend:
          serviceName: my-app-baseline
          servicePort: 80
        path: /
      - backend:
          serviceName: my-app-main
          servicePort: 80
        path: /
```

This configuration assigns 80% of traffic to my-app-main automatically, thus freeing the user from having to complete percentage values manually. This becomes handy when increasing shares for canary releases continuously.

Production advice

Resource limitations

The examples shown deliberately do not specify any resource limitations, as there is no one-size-fits-all configuration.

In a production environment, however, it is important to set proper bounds, especially with regards to CPU:

  • If bounds are too strict, Traefik will be throttled while serving requests (as Kubernetes imposes hard quotas).
  • If bounds are too loose, Traefik may consume resources that are then unavailable to other containers.

When in doubt, you should measure your resource needs, and adjust requests and limits accordingly.
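As an illustrative starting point only (the numbers below are placeholders to measure against, not recommendations), resource bounds go into the Traefik container spec:

```yaml
resources:
  requests:
    cpu: 100m        # placeholder: what the scheduler reserves for the pod
    memory: 64Mi
  limits:
    cpu: 500m        # placeholder: hard quota; set too low, request serving is throttled
    memory: 128Mi
```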