Quick Start
A Use Case of Traefik Proxy and Kubernetes
This guide is an introduction to using Traefik Proxy in a Kubernetes environment.
The objective is to learn how to run an application behind a Traefik reverse proxy in Kubernetes.
It presents and explains the basic building blocks required to start with Traefik, such as the ingress controller, Ingresses, Deployments, and static and dynamic configuration.
Permissions and Accesses
Traefik uses the Kubernetes API to discover running services.
To use the Kubernetes API, Traefik needs some permissions. This permission mechanism is based on roles defined by the cluster administrator.
The role is then bound to an account used by an application, in this case, Traefik Proxy.
The first step is to create the role. The ClusterRole resource enumerates the resources and actions available for the role. In a file called 00-role.yml, put the following ClusterRole:
00-role.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role

rules:
  - apiGroups:
      - ""
    resources:
      - services
      - secrets
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.io
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
      - serverstransporttcps
    verbs:
      - get
      - list
      - watch
You can find the reference for this file in the Traefik documentation.
The next step is to create a dedicated service account for Traefik. In a file called 00-account.yml, put the following ServiceAccount resource:
00-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-account
Then, bind the role to the service account to grant it those permissions and rules. In a file called 01-role-binding.yml, put the following ClusterRoleBinding resource:
01-role-binding.yml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role-binding

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-role
subjects:
  - kind: ServiceAccount
    name: traefik-account
    namespace: default # This tutorial uses the "default" K8s namespace.
- roleRef is the Kubernetes reference to the role created in 00-role.yml.
- subjects is the list of accounts the role is bound to. In this guide, it only contains the account created in 00-account.yml.
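Once these resources have been applied (see the kubectl apply command later in this section), you can double-check that the binding grants the expected permissions by impersonating the service account with kubectl auth can-i. The namespace below assumes the default namespace used throughout this guide:

kubectl auth can-i list services --as=system:serviceaccount:default:traefik-account
kubectl auth can-i watch ingresses --as=system:serviceaccount:default:traefik-account

Both commands should print yes.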
Deployment and Exposition
This section can be managed with the help of the Traefik Helm chart.
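If you prefer the chart, a minimal installation could look like the sketch below; the repository URL and release name are the ones commonly published for the chart, not values defined in this guide. The rest of this section builds the equivalent resources by hand instead.

helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik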
The ingress controller is software that runs in the same way as any other application on a cluster.
To start Traefik on the Kubernetes cluster, a Deployment resource must exist to describe how to configure and scale containers horizontally to support larger workloads.
Start by creating a file called 02-traefik.yml and paste the following Deployment resource:
02-traefik.yml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik

spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
        - name: traefik
          image: traefik:v3.2
          args:
            - --api.insecure
            - --providers.kubernetesingress
          ports:
            - name: web
              containerPort: 80
            - name: dashboard
              containerPort: 8080
The deployment contains an important attribute for customizing Traefik: args.
These arguments are the static configuration for Traefik.
From here, it is possible to enable the dashboard, configure entry points, select dynamic configuration providers, and more.
In this deployment, the static configuration enables the Traefik dashboard, and uses Kubernetes native Ingress resources as router definitions to route incoming requests.
When there is no entry point in the static configuration, Traefik creates a default one called web, using port 80 to route HTTP requests. When the api.insecure mode is enabled, Traefik exposes the dashboard on port 8080.
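As a sketch of how this static configuration can grow (the extra flags below are illustrative and are not used in the manifests of this guide), the same args list could declare entry points explicitly and adjust logging:

args:
  - --api.insecure
  - --providers.kubernetesingress
  - --entryPoints.web.address=:80 # make the default web entry point explicit
  - --entryPoints.websecure.address=:443 # an extra HTTPS entry point (illustrative)
  - --log.level=INFO # tune log verbosity

Note that adding a websecure entry point would also require exposing a matching containerPort and Service port, which this guide does not do.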
A Deployment manages scaling and can therefore create many containers, called Pods. Each Pod is configured following the spec field in the Deployment. Because a Deployment can run multiple Traefik Proxy Pods, an additional piece is required to forward traffic to any of those instances: a Service.
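For instance (this is not a step in the guide), the replica count of the Deployment defined above could later be adjusted with a single command:

kubectl scale deployment traefik-deployment --replicas=2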
Create a file called 02-traefik-services.yml and insert the following two Service resources:
02-traefik-services.yml
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard-service

spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: dashboard
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service

spec:
  type: LoadBalancer
  ports:
    - targetPort: web
      port: 80
  selector:
    app: traefik
It is possible to expose a Service in different ways. Depending on your working environment and use case, the spec.type might change. It is strongly recommended to understand the available Kubernetes Service types before proceeding to the next step.
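For example, on a local or bare-metal cluster with no load-balancer integration, a NodePort Service is a common alternative. The following is only a sketch (the nodePort value is an arbitrary choice) and is not part of this guide's files:

apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service
spec:
  type: NodePort # exposes the Service on a port of every node
  ports:
    - targetPort: web
      port: 80
      nodePort: 30080 # must fall within the cluster's NodePort range (30000-32767 by default)
  selector:
    app: traefik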
It is now time to apply those files on your cluster to start Traefik.
kubectl apply -f 00-role.yml \
-f 00-account.yml \
-f 01-role-binding.yml \
-f 02-traefik.yml \
-f 02-traefik-services.yml
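Once the resources are applied, you can verify that the Traefik Pod is running and that both Services obtained an address; the exact output depends on your cluster:

kubectl get pods --selector "app=traefik"
kubectl get services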
Proxying applications
The only part still missing is the business application behind the reverse proxy.
For this guide, we use the example application traefik/whoami, but the principles are applicable to any other application.
The whoami application is an HTTP server running on port 80 that replies to incoming requests with host-related information.
As usual, start by creating a file called 03-whoami.yml and paste the following Deployment resource:
03-whoami.yml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  labels:
    app: whoami

spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - name: web
              containerPort: 80
And continue by creating the following Service resource in a file called 03-whoami-services.yml:
03-whoami-services.yml
apiVersion: v1
kind: Service
metadata:
  name: whoami

spec:
  ports:
    - name: web
      port: 80
      targetPort: web
  selector:
    app: whoami
Thanks to the Kubernetes API, Traefik is notified when an Ingress resource is created, updated, or deleted.
This makes the process dynamic.
The ingresses are, in a way, the dynamic configuration for Traefik.
Tip
Find more information on ingress controllers and Ingresses in the official Kubernetes documentation.
Create a file called 04-whoami-ingress.yml and insert the Ingress resource:
04-whoami-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  name: web
This Ingress configures Traefik to route any incoming request whose path starts with / to the whoami:80 service.
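As a sketch of how this rule could be narrowed, an Ingress rule can also match on a hostname. The whoami.localhost value below is purely illustrative and is not created anywhere in this guide:

spec:
  rules:
    - host: whoami.localhost # illustrative hostname; requests must carry this Host header
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  name: web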
At this point, all the configurations are ready. It is time to apply those new files:
kubectl apply -f 03-whoami.yml \
-f 03-whoami-services.yml \
-f 04-whoami-ingress.yml
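You can confirm that the Ingress was created and picked up; on some local clusters the ADDRESS column may stay empty:

kubectl get ingress whoami-ingress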
Now you should be able to access the whoami application and the Traefik dashboard. Load the dashboard in a web browser: http://localhost:8080.
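If your environment does not assign an address to LoadBalancer Services, kubectl port-forward is a possible fallback to reach the dashboard locally (the local port 8080 here is an arbitrary choice):

kubectl port-forward deployment/traefik-deployment 8080:8080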
And now access the whoami application:
curl -v http://localhost/
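If everything is wired correctly, whoami replies with plain-text host and request details. The response below is abridged and purely illustrative; actual values depend on your cluster:

Hostname: whoami-7f8b...
IP: 10.42.0.12
RemoteAddr: 10.42.0.11:43720
GET / HTTP/1.1
Host: localhost
X-Forwarded-For: 10.42.0.1
X-Forwarded-Host: localhost
X-Forwarded-Proto: http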
Going further
- Filter the ingresses to use with IngressClass
- Use IngressRoute CRD
- Protect ingresses with TLS
Using Traefik OSS in Production?
If you are using Traefik at work, consider adding enterprise-grade API gateway capabilities or commercial support for Traefik OSS.
Adding API Gateway capabilities to Traefik OSS is fast and seamless. There’s no rip and replace and all configurations remain intact. See it in action via this short video.